diff --git a/Walkthrough Guide/01 Setup/README.md b/Walkthrough Guide/01 Setup/README.md index d1ea5df2..4e896308 100644 --- a/Walkthrough Guide/01 Setup/README.md +++ b/Walkthrough Guide/01 Setup/README.md @@ -44,7 +44,7 @@ Creating a Microsoft Azure Account is easy! Just head over to the [Microsoft Azu ![Free Azure Account](Assets/FreeAzureAccount.png) -Although the free Account includes a bunch of services that you can use, in this workshop we will work with advanced resources, which we need an Azure Subscription for. An Azure Subscriptuon is basically the way to pay for charged services and can be backed by a Credit Card or a company agreement. +Although the free Account includes a bunch of services that you can use, in this workshop we will work with advanced resources, which we need an Azure Subscription for. An Azure Subscription is basically the way to pay for charged services and can be backed by a Credit Card or a company agreement. You can check the Subscriptions for you account when visiting the [Azure Portal](https://portal.azure.com) and selecting ***Subscriptions*** from the side menu. @@ -56,13 +56,13 @@ If no Subscriptions appear, visit the [Azure Subscription Portal](https://accoun ### .NET Core -Most parts of this workshop are written in .NET Core 2.1 and we need to compile a few things from time to time. For this, we need to the [.NET Core SDK](https://www.microsoft.com/net/download/windows/build) installed. To check if the installation has been successful, open the *Terminal* or *Command Prompt* on your machine and type in +Most parts of this workshop are written in .NET Core 2.1 and we need to compile a few things from time to time. For this, we need the [.NET Core SDK](https://www.microsoft.com/net/download/windows/build) installed. 
To check if the installation has been successful, open the *Terminal* or *Command Prompt* on your machine and type in ```bash dotnet --info ``` -If the command line answers you similar like shown in the screenshot below, your machine can now run and compile .NET code. +If the command line responds in a similar way to what is shown in the screenshot below, your machine can now run and compile .NET code: ![Visual Studio Running Xamarin iOS and Android App](Assets/DotnetInfoBash.png) @@ -74,7 +74,8 @@ Note, that you need to ***Reload*** Visual Stuido Code after installing extensio ![Screenshot of Visual Studio Code for Reloading Extensions](Assets/VSCodeReloadExtensions.png) -Once the extensions has been installed successful and Visual Stuido Code has been reloaded, you should see a new ***Azure*** tab on the side. Select it and make sure that you are logged in with you Azure account. Please verify, that you see at least one of your subscriptions here. + +Once the extensions have been installed successfully and Visual Studio Code has been reloaded, you should see a new ***Azure*** tab on the side. Select it and make sure that you are logged in with your Azure account. Please verify that you see at least one of your subscriptions here: ![Screenshot of Visual Studio Code showing Subscriptions in the Azure Tab](Assets/VSCodeAzureSubs.png) @@ -88,20 +89,20 @@ If you want to compile the Xamarin Application on you own, you will need to inst #### Windows -When working in Windows, Visual Studio will be the best IDE for you! You can check internally if you have a license for the paid versions or even go with the free Community Edition. Both will work for you. +When working in Windows, Visual Studio will be the best IDE for you! You can check internally to see if your company has a license for the paid versions or even go with the free Community Edition. Both will work for you. 
-Please [follow this guide](https://developer.xamarin.com/guides/cross-platform/getting_started/installation/windows/) to install the Xamarin Tooling for Visual Studio on Windows and make sure, you have at least Android API Level 16 and an Android Emulator installed. +Please [follow this guide](https://developer.xamarin.com/guides/cross-platform/getting_started/installation/windows/) to install the Xamarin Tooling for Visual Studio on Windows and make sure you have at least Android API Level 16 and an Android Emulator installed. When working on Windows, you won't be able to build iOS solutions unless you connect your machine with a Mac in your network. To follow this workshop, an iOS configuration is not mandatory! [Follow this guide](https://developer.xamarin.com/guides/ios/getting_started/installation/windows/) if you want to connect to a Mac Build Host anyway. #### Mac -When using a Mac, the best Xamarin Tooling provides Visual Studio for Mac. Xamarin should be installed during the installation of Visual Studio. Please [follow this guide](https://docs.microsoft.com/en-us/visualstudio/mac/installation) to make sure you don't miss anything. +When using a Mac, the best Xamarin Tooling is available in Visual Studio for Mac. Xamarin should be installed during the installation of Visual Studio. Please [follow this guide](https://docs.microsoft.com/en-us/visualstudio/mac/installation) to make sure you don't miss anything. -If you want to build iOS solutions, make sure that XCode is also installed on the same device! +If you want to build iOS solutions, make sure that Xcode is also installed on the same device! #### Test your installation -To make sure your environment works as expected and is able to compile and execute Xamarin apps, your can simply open the [`ContosoMaintenance.sln`](/ContosoMaintenance.sln) solution with Visual Studio and select the `ContosoFieldService.iOS` or `ContosoFieldService.Droid` project as your Startup project. 
If the application gets compiled and the app can be started, you are good to go. +To make sure your environment works as expected and is able to compile and execute Xamarin apps, you can simply open the [`ContosoMaintenance.sln`](/ContosoMaintenance.sln) solution with Visual Studio and select the `ContosoFieldService.iOS` or `ContosoFieldService.Droid` project as your Startup project. If the application gets compiled and the app can be started, you are good to go. ![Visual Studio Running Xamarin iOS and Android App](Assets/VSMacRunningiOSandAndroid.png) diff --git a/Walkthrough Guide/02 Architecture Options/README.md b/Walkthrough Guide/02 Architecture Options/README.md index 055be3f3..fc37abfe 100644 --- a/Walkthrough Guide/02 Architecture Options/README.md +++ b/Walkthrough Guide/02 Architecture Options/README.md @@ -5,8 +5,7 @@ Deciding how to architect a solution isn't an easy task and depending on who you We're looking for a solution that allows us lots of flexibility with minimal maintenance. We're interested in focusing on the business problem rather than deploying and maintaining a set of virtual machines. -It's for the reason that we'll opt to use Platform as a Service (PaaS) as much as possible within our design. - +It's for this reason that we'll opt to use Platform as a Service (PaaS) as much as possible within our design. ## The real architecture @@ -14,7 +13,8 @@ It's for the reason that we'll opt to use Platform as a Service (PaaS) as much a Above you can see a high-level overview of our production architecture. Some key decisions: ### Orchestration -We were going to leverage our .NET skills and build a ASP.NET Web API targeting .NET Core; we've lots of flexibility on where and how to host the code. + +We are going to leverage our .NET skills and build an ASP.NET Web API targeting .NET Core; we have lots of flexibility on where and how to host the code. 
We picked Azure App Service as it supports great IDE integration for both Visual Studio PC and Visual Studio Mac, as well as offering all the PaaS goodness we need to focus on other parts of the solution. @@ -48,9 +48,9 @@ If you're interested in helping, then please reach out to us! Learn more about [Service Fabric](https://azure.microsoft.com/en-us/services/service-fabric/) ## Connecting to remote resources securely -ExpressRoute is an Azure service that lets you create private connections between Microsoft datacenters and infrastructure that’s on your premises or in a colocation facility. ExpressRoute connections do not go over the public Internet, instead ExpressRoute uses dedicated connectivity from your resources to Azure. This provides reliability and speeds guarantees with lower latencies than typical connections over the Internet. Microsoft Azure ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a private connection facilitated by a connectivity provider. Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection through a connectivity provider at a co-location facility. +ExpressRoute is an Azure service that lets you create private connections between Microsoft datacenters and infrastructure that’s on your premises or in a co-location facility. ExpressRoute connections do not go over the public Internet; instead, ExpressRoute uses dedicated connectivity from your resources to Azure. This provides reliability and speed guarantees, with lower latencies than typical connections over the Internet. Microsoft Azure ExpressRoute lets you extend your on-premises networks into the Microsoft Cloud over a private connection facilitated by a connectivity provider. Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection through a connectivity provider at a co-location facility. 
-Microsoft uses industry standard dynamic routing protocol (BGP) to exchange routes between your on-premises network, your instances in Azure, and Microsoft public addresses. We establish multiple BGP sessions with your network for different traffic profiles. The advantage of ExpressRoute connections over S2S VPN or accessing Microsoft cloud services over internet are as follows; +Microsoft uses the industry standard dynamic routing protocol (BGP) to exchange routes between your on-premises network, your instances in Azure, and Microsoft public addresses. We establish multiple BGP sessions with your network for different traffic profiles. The advantages of ExpressRoute connections over S2S VPN or accessing Microsoft cloud services over the internet are as follows: * more reliability * faster speeds @@ -62,9 +62,9 @@ Bandwidth options available in ExpressRoute are 50 Mbps, 100 Mbps, 200 Mbps, 500 ![Express Route Connectivity Model](Assets/ERConnectivityModel.png) -There are three ways to connect customer’s on-premise infrastructure to Azure (or microsoft cloud services) using ExpressRoute, they are; +There are three ways to connect a customer’s on-premises infrastructure to Azure (or Microsoft cloud services) using ExpressRoute: -1. WAN integration (or call IPVPN or MPLS or any-to-any connectivity) +1. WAN integration (also called IP VPN, MPLS or any-to-any connectivity) 2. Cloud Exchange through Co-Location Provider 3. Point-to-Point Ethernet Connection diff --git a/Walkthrough Guide/03 Web API/README.md b/Walkthrough Guide/03 Web API/README.md index 1e7495dc..3f023428 100644 --- a/Walkthrough Guide/03 Web API/README.md +++ b/Walkthrough Guide/03 Web API/README.md @@ -4,19 +4,19 @@ Azure App Service is Microsoft’s fully managed, highly scalable platform for hosting web, mobile and API apps built using .NET, Java, Ruby, Node.js, PHP, and Python or Docker containers. 
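Everything this module does through the portal can also be scripted. As a hedged sketch (it assumes the Azure CLI is installed and you have signed in with `az login`; every resource name below is a made-up example, not a value from this workshop):

```shell
# Hypothetical names throughout - replace with your own.
# Create the resource group that will hold this workshop's services.
az group create --name myawesomestartup-rg --location westeurope

# Create a B1 Basic App Service Plan (the tier used later in this module).
az appservice plan create --name myawesomestartup-plan \
  --resource-group myawesomestartup-rg --sku B1

# Create the Web App inside that plan.
az webapp create --name myawesomestartupapi \
  --resource-group myawesomestartup-rg --plan myawesomestartup-plan
```

The portal walkthrough below covers the same steps interactively, which is the route this workshop follows.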
-App Service is fully managed and allows us to set the maximum number of instances on which we want to run our backend app on. Microsoft will then manage the scaling and load balancing across multiple instances to ensure your app perform well under heavy load. Microsoft manages the underlying compute infrastructure required to run our code, as well as patching and updating the OS and Frameworks when required. +App Service is fully managed and allows us to set the maximum number of instances that we want to run our backend app on. Microsoft will then manage the scaling and load-balancing across multiple instances to ensure your app will perform well under heavy load. Microsoft manages the underlying compute infrastructure required to run our code, as well as patching and updating the OS and Frameworks when required. ## 1. Resource Group Before we can deploy an App Service instance, we need to create a resource group to hold today's services. Resource groups can be thought of as logical folders for your Azure Services (Resources). You may wish to create separate resource groups per department, or you may want to have one resource group per project. Resource groups are great for grouping all the services associated with a solution together. During development, it means you can quickly delete all the resources in one operation! -In this workshop, we’ll be deploying just one resource group to manage all of our required services. When in production, it means we can see how much the services are costing us and how the resources are being used. +In this workshop, we’ll be deploying just one resource group to manage all of our required services. When done like this in production, it means we can see how much the services are costing us and how the resources are being used. ### 1.1 Create a new Resource Group ![Create a new Resource Group](Assets/CreateResourceGroup.png) -Navigate to the [portal.azure.com](portal.azure.com) and sign in with your credentials. 
+Navigate to [portal.azure.com](https://portal.azure.com) and sign in with your credentials. 1. Click ***Resource groups*** in the top-left corner. 2. Click ***Add*** to bring up configuration pane. @@ -31,21 +31,21 @@ And voilà, we are done. Now we can start to add services to our newly created R ## 2. Create a new App Service (Web App) -Web Apps are one of the App Services, that we can deploy to Azure. They can be configured easily at the [Azure Portal](https://portal.azure.com). You can find them, by clicking the ***Create a resource*** button at the top-left corner and selecting the ***Web*** category. +Web Apps are one of the App Services that we can deploy to Azure. They can be configured easily at the [Azure Portal](https://portal.azure.com). You can find them by clicking the ***Create a resource*** button at the top-left corner and selecting the ***Web*** category. ![Create new App Service Web App](Assets/CreateNewAppService.png) ### 2.1 Configure your App Service -As you can see in the configuration blade, we have to configure a few things, before creating a new App Service, as App Name, Subscription, Resource Group, OS and App Service Plan / Location. Let's go through all of them in detail quickly, to understand, what we are configuring here. +As you can see in the configuration blade, we have to configure a few things before creating a new App Service, such as App Name, Subscription, Resource Group, OS and App Service Plan / Location. Let's go through all of them in detail quickly to understand what we are configuring here. #### App name -This is the name of your application and as you can see, the name will resolve into an web address like `yourname.azurewebsites.net`. After creation of your App Service, it weill be publically available at this address. Of course, you can also assign a custom domain to it later. +This is the name of your application and as you can see, the name will resolve into a web address like `yourname.azurewebsites.net`. 
After creation of your App Service, it will be publicly available at this address. Of course, you can also assign a custom domain to it later. #### Subscription -By the end of the day, someone has to pay for all these services, that we are provisioning. Behind every Azure Subscription is a payment model that takes care of our cost. One Azure Account can have multiple Subscriptions. +At the end of the day, someone has to pay for all these services that we are provisioning. Behind every Azure Subscription is a payment model that takes care of our cost. One Azure Account can have multiple Subscriptions. #### Resource Group @@ -55,11 +55,11 @@ We have learned about the concept of Resource Groups earlier in this module. Dur App Services, can be based on Windows, Linux or Docker as their core technology. This becomes important, when taking a look at the programming framework, that we are using for the application's logic itself. While .NET Framework for example only runs on Windows, Node.js is more performant on a Linux host. If we want to provide a Docker container instead of deploying our application directly to the App Service, we can do that as well. -> **Hint:** At this workshop, the Backend API Logic is written with .NET Core, which runs cross-platform. So you can choose both, Windows and Linux. We also provide it as a Docker image, so you can also choose Docker as the operation system of your App Service. Just choose, whatever you are most interested in! +> **Hint:** In this workshop, the Backend API Logic is written with .NET Core, which runs cross-platform. So you can choose either Windows or Linux. We also provide it as a Docker image, so you can also choose Docker as the operating system of your App Service. Just choose whatever you are most interested in! #### App Service Plan -An App Service, is just the logical instance of an application, so it has to run within an **App Service Plan**, which is provides the actual hardware for it. 
You can run multiple App Services within the same App Service Plan, if you want to, but be aware, that they share the App Service Plan's Resources then. We will create an App Service Plan step by step in the following sections of this module. +An App Service is just the logical instance of an application, so it has to run within an **App Service Plan**, which provides the actual hardware for it. You can run multiple App Services within the same App Service Plan if you want to, but be aware that they share the App Service Plan's Resources then. We will create an App Service Plan step by step in the following sections of this module. ### 2.2 Create an App Service Plan @@ -77,7 +77,7 @@ Fill in the following values: Creating an App Service Plan is easy, but we have to consider where our users are? We want our services to be running as close to our users as possible as this dramatically increases performance. We also need to consider how much Compute resources we think we'll need to meet demand. -Clicking ***Pricing Tier***, shows all the different options we have (it's a lot!). I won't list what their differences are as their listed in the portal, but keep it mind, with the cloud we don't need to default to over-provisioning. We can scale up later if we have to! For this workshop, a B1 Basic site will be more than enough to run this project. More complex development projects should use something in the Standard range of pricing plans. Production apps should be set up in Standard or Premium pricing plans. +Clicking ***Pricing Tier*** shows all the different options we have (it's a lot!). I won't list what their differences are as they're listed in the portal, but keep in mind that with the cloud we don't need to default to over-provisioning. We can scale up later if we have to! For this workshop, a B1 Basic site will be more than enough to run this project. More complex development projects should use something in the Standard range of pricing plans. 
Production apps should be set up in Standard or Premium pricing plans. ![Select App Service Plan Pricing Tier](Assets/SelectAppServicePlanTier.png) @@ -100,7 +100,7 @@ Because my app name was: "myawesomestartupapi", the unique URL would be: `https: ## 3. Deploy your apps to App Service -Azure App Service has many options for how to deploy our code. These include continuous integration, which can link to Visual Studio Team Services or GitHub. We could also use FTP to upload the project, but we're not animals, so we won't. +Azure App Service has many options for how to deploy our code. These include continuous integration which can link to Visual Studio Team Services or GitHub. We could also use FTP to upload the project but we're not animals, so we won't. The good news is: The full ASP.NET Core WebAPI code for the backend logic is already written for us and is located in the `Backend/Monolithic` folder of the workshop. But before we can upload it to the cloud, we need to **compile** it to make it machine readable, or **create a Docker image** for it. We will go through both options during this module. @@ -114,7 +114,7 @@ We quickly have to dive into the .NET Developer's world! For this, right-click t dotnet build ``` -The output should look like this and we should see the **Build succeeded** message. +The output should look like this and we should see the **Build succeeded** message: ![VSCode run dotnet build](Assets/VSCodeDotnetBuild.png) @@ -124,7 +124,7 @@ Building (compiling) the code generated two more folders for us: `/bin` and `/ob dotnet publish ``` -Once this command ran successfully, we have everything we need. Inside our `Monolithic` folder, we should now find a `bin/Debug/netcoreapp2.0/publish` folder that contains our ready-to-run backend logic. Now you can simply right-click this `publish` folder and select ***Deploy to Web App***. +Once this command has run successfully, we have everything we need. 
Inside our `Monolithic` folder we should now find a `bin/Debug/netcoreapp2.0/publish` folder that contains our ready-to-run backend logic. Now you can simply right-click this `publish` folder and select ***Deploy to Web App***. ![VSCode Deploy to Web App](Assets/VSCodePublishWebApp.png) @@ -151,11 +151,11 @@ To create a new registry, open the [Azure Portal](https://portal.azure.com), cli Click the ***Create*** button and wait until your Container Registry got provisioned. -In the ***Keys*** section of your Container Registry, you will find important information, like **Registry Name**, **Login Server**, **Username** and **Password**, that you will need to tag and upload a Docker image to it. +In the ***Keys*** section of your Container Registry, you will find important information like **Registry Name**, **Login Server**, **Username** and **Password**, that you will need to tag and upload a Docker image to it. ![Create an Azure Container Registry](Assets/AzureContainerRegistryKeys.png) -In your Command Line, run the following command, to log into your freshly created Container Registry. Make sure, to replace `myawesomestartup.azurecr.io` with your **Login Server**. +In your Command Line, run the following command to log into your freshly created Container Registry. Make sure to replace `myawesomestartup.azurecr.io` with your **Login Server**: ```bash docker login myawesomestartup.azurecr.io -u -p ``` @@ -169,7 +169,7 @@ Right-click the `Monolithic` folder in Visual Studio Code and select ***Open in docker image build -t myawesomestartup.azurecr.io/contosomaintenance/api:latest . ``` -That triggers the creation process of the Docker image, based on the Dockerfile in the repository. During that process, the official [.NET Core SDK Docker Image](https://hub.docker.com/r/microsoft/dotnet/) gets downloaded from Dockerhub and the code will be compiled in there. 
To verify, that the image got created successfully, you can list all images on your machine with the following command. +That triggers the creation process of the Docker image, based on the Dockerfile in the repository. During that process, the official [.NET Core SDK Docker Image](https://hub.docker.com/r/microsoft/dotnet/) gets downloaded from Dockerhub and the code will be compiled in there. To verify that the image got created successfully, you can list all images on your machine with the following command. ```bash docker images @@ -179,27 +179,27 @@ The output should contain your image. ![List of local Docker images](Assets/ListDockerImages.png) -Now we can push the image do our Azure Container Registry with the following command. +Now we can push the image to our Azure Container Registry with the following command: ```bash docker push myawesomestartup.azurecr.io/contosomaintenance/api ``` -Next, we open the [Azure Portal](https://portal.azure.com) and navigate to your Docker based App Service, that you have created earlier. When you scroll down to the ***Container Settings*** on the left side, you can find a configuration for image sources (like Azure Container Registry or Docker Hub). +Next, we open the [Azure Portal](https://portal.azure.com) and navigate to your Docker based App Service that you have created earlier. When you scroll down to the ***Container Settings*** on the left-hand side, you can find a configuration for image sources (like Azure Container Registry or Docker Hub). ![Select Container in App Service](Assets/SelectContainerAppService.png) Here we can connect to our Container Registry. Select our container and ***Save*** the settings. -> **Hint:** You can enable ***Continuous Deployment*** at the bottom of the Container Settings, to update the application automatically, when a new version of your container gets pushed to the Container Registry. 
+> **Hint:** You can enable ***Continuous Deployment*** at the bottom of the Container Settings to update the application automatically when a new version of your container gets pushed to the Container Registry. -### 3.2 Verify, your app is running +### 3.2 Verify your app is running After a few seconds, after refreshing the browser, your Web App should display the published code and look like this: ![Deployed API with Swagger UI](Assets/DeployedWebAPI.png) -To test if the deployment is work and the app is accepting HTTP requests correctly, let's go ahead and navigate to the **/api/ping** endpoint. In my case, I'll use the following URL: `http://myawesomestartupapi.azurewebsites.net/api/ping`. +To test if the deployment is working and the app is accepting HTTP requests correctly, let's go ahead and navigate to the **/api/ping** endpoint. In my case, I'll use the following URL: `http://myawesomestartupapi.azurewebsites.net/api/ping`. ![Deployed API with no UI](Assets/AppServiceDeploymentTest.png) @@ -223,18 +223,18 @@ This shows that the backend is responding as expected! Before we move onto deplo You've now deployed your first App Service instance! We'll now review some 'Pro tips' to help you get the most out of your Azure service. ## Controlling Density -Most users will have a low number (usually less than 10) applications per App Service Plan. In scenarios where you expect you'll be running many more applications, it's crucial to prevent over-saturating the underlying compute capacity. +Most users will have a low number (usually fewer than 10) of applications per App Service Plan. In scenarios where you expect you'll be running many more applications, it's crucial to prevent over-saturating the underlying compute capacity. Let's imagine that we've deployed one instance of our admin web portal and two instances of our mobile web API to the same App Service Plan. 
By default, all apps contained in a given App Service Plan will run on all the available compute resources (servers) allocated. If we only have a single server in our App Service Plan, we'll find that this single server will run all our available apps. Alternatively, if we scale out the App Service Plan to run on two servers, we'll run all our applications (3 apps) on both sets of servers. -This approach is absolutely fine if you find that your apps are using approximately the same amount of compute resources. If this isn't the case, then you may find that one app is consuming the lions share of compute resources, thus degrading the entire system performance. In our case, the mobile API will likely drive significant consumption of server resources, so we need to mitigate its effects on the performance of the admin portal. +This approach is absolutely fine if you find that your apps are using approximately the same amount of compute resources. If this isn't the case, then you may find that one app is consuming the lion's share of compute resources, thus degrading the entire system performance. In our case, the mobile API will likely drive significant consumption of server resources, so we need to mitigate its effects on the performance of the admin portal. To do this, what we can do is move lower-volume applications (such as the portal) into a single App Service Plan running on a single compute resource. Place high demand apps into an App Service Plan which is configured to auto-scale based on CPU and memory utilisation. ## Per-App Scaling -Another alternative for running large numbers of applications more efficiently is to use the per-app scaling feature of Azure App Service. We've [documententation](https://msdn.microsoft.com/en-us/magazine/mt793270.aspx) that covers per-app scaling in detail. Per-App scaling lets you control the maximum number of servers allocated to a given application, and you can do so per application. 
In this case, an application will run on the defined maximum number of servers and not on all available servers. +Another alternative for running large numbers of applications more efficiently is to use the per-app scaling feature of Azure App Service. We have [documententation](https://msdn.microsoft.com/en-us/magazine/mt793270.aspx) that covers per-app scaling in detail. Per-App scaling lets you control the maximum number of servers allocated to a given application, and you can do so per application. In this case an application will run on the defined maximum number of servers and not on all available servers. ## Application Slots App Service has a feature called [deployment slots](https://docs.microsoft.com/en-gb/azure/app-service/web-sites-staged-publishing). In a nutshell, a deployment slot enables you to have another application (slot) other than your production app. It’s another application that you can use to test new code before swapping into production. @@ -253,9 +253,9 @@ If resource competition is scoped just to scenarios such as running load tests, * When the non-production slot is ready to be swapped into production, move it back to the same App Service Plan running the production slot. Then the slot swap operation can be carried out. ## Deploying to Production with no downtime -You have a successful application running on an App Service Plan, and you have a great team to make updates to your application on a daily basis. In this case, you don’t want to deploy bits directly into production. You want to control the deployment and minimize downtime. For that, you can use your application slots. Set your deployment to the “pre-production” slot, which can be configured with production setting, and deploy your latest code. You can now safely test your app. Once you’re satisfied, you can swap the new bits into production. 
The swap operation doesn’t restart your application, and in return, the Controller notifies the front-end load balancer to redirect traffic to the latest slots. +You have a successful application running on an App Service Plan, and you have a great team to make updates to your application on a daily basis. In this case, you don’t want to deploy bits directly into production. You want to control the deployment and minimize downtime. For that, you can use your application slots. Set your deployment to the “pre-production” slot, which can be configured with production settings, and deploy your latest code. You can now safely test your app. Once you’re satisfied, you can swap the new bits into production. The swap operation doesn’t restart your application, and in turn, the Controller notifies the front-end load balancer to redirect traffic to the latest slots. -Some applications need to warm up before they can safely handle production load—for example, if your application needs to load data into cache, or for a .NET application to allow the .NET runtime to JIT your assemblies. In this case, you’ll also want to use application slots to warm up your application before swapping it into production. +Some applications need to warm up before they can safely handle production load; for example, if your application needs to load data into cache, or for a .NET application to allow the .NET runtime to JIT your assemblies. In this case, you’ll also want to use application slots to warm up your application before swapping it into production. We often see customers having a pre-production slot that’s used to both test and warm up the application. You can use Continuous Deployment tools such as Visual Studio Release Manager to set up a pipeline for your code to get deployed into pre-production slots, run test for verification and warm all required paths in your app before swapping it into production. 
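In such a pipeline, the swap itself is a single CLI call. A hedged Azure CLI sketch (the resource names are placeholders from earlier examples, and it assumes a slot named `staging` has already been created for the app):

```shell
# Placeholder names; assumes the app and a "staging" slot already exist.
# Swaps the warmed-up staging slot into production.
az webapp deployment slot swap \
  --resource-group myawesomestartup-rg \
  --name myawesomestartupapi \
  --slot staging --target-slot production
```

Running this at the end of a release pipeline gives you the zero-downtime behaviour described above.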
@@ -277,7 +277,7 @@ Address: 51.140.59.233

You’ll notice that an App Service scale unit is deployed on Azure Cloud Service (by the cloudapp.net suffix). WAWS stands for Windows Azure (when Azure was still called Windows) Web sites (the original name of App Service).

## Outbound Virtual IPs
-Most likely your application is connected to other Azure and non-Azure services. As such, your application makes outbound network calls to endpoints, not on the scale unit of your application. This includes calling out to Azure services such as SQL Database and Azure Storage. There are up to five VIPs (the one public VIP and four outbound dedicated VIPs) used for outbound communication. You can’t choose which VIP your app uses, and all outbound calls from all apps in scale unit are using the five allocated VIPs. If your application uses a service that requires you to whitelist IPs that are allowed to make API calls into such a service, you’ll need to register all five VIPs of the scale unit. To view which IPs are allocated to outbound VIPs for a given unit of scale (or for your app from your perspective) go to the Azure portal, as shown in the below image.
+Most likely your application is connected to other Azure and non-Azure services. As such, your application makes outbound network calls to endpoints that are not on the scale unit of your application. This includes calling out to Azure services such as SQL Database and Azure Storage. There are up to five VIPs (the one public VIP and four outbound dedicated VIPs) used for outbound communication. You can’t choose which VIP your app uses, and all outbound calls from all apps in the scale unit use the five allocated VIPs. If your application uses a service that requires you to whitelist IPs that are allowed to make API calls into such a service, you’ll need to register all five VIPs of the scale unit.
To view which IPs are allocated to outbound VIPs for a given unit of scale (or for your app from your perspective) go to the Azure portal, as shown in the below image. ![Create new App Service Plan](Assets/OutboundVIP.png) diff --git a/Walkthrough Guide/04 Data Storage/README.md b/Walkthrough Guide/04 Data Storage/README.md index 4bd6c6d3..c0b24f6b 100755 --- a/Walkthrough Guide/04 Data Storage/README.md +++ b/Walkthrough Guide/04 Data Storage/README.md @@ -2,13 +2,13 @@ # Data Storage -As we are collecting and displaying different types of information like *Jobs*, *Parts*, *Users* and *photos*, we need to store them somewhere in the cloud. For this, we chose two different types of storages: **Blob Storage** for raw files like images and a **NoSQL Database** for storing unstructured data like Jobs. +As we are collecting and displaying different types of information like *Jobs*, *Parts*, *Users* and *photos*, we need to store them somewhere in the cloud. For this, we chose two different types of storage: **Blob Storage** for raw files like images and a **NoSQL Database** for storing unstructured data like Jobs. ## 1. Azure Cosmos DB for unstructured data -Whenever it comes to unstructured data an NoSQL approaches in the Microsoft Azure ecosystem, Cosmos DB should be our database of choice. It is a globally-distributed, multi-model database service which makes it super flexible to use and extremely easy to scale to other regions. +Whenever it comes to unstructured data and NoSQL approaches in the Microsoft Azure ecosystem, Cosmos DB should be our database of choice. It is a globally-distributed, multi-model database service which makes it super flexible to use and extremely easy to scale to other regions. -Beside *Disk Space* and *Consistency*, Cosmos DB's main scale dimension is *Throughput*. For each collection, developers can reserve throughput for their data, which ensures the 99.99th percentile of latency for reads to under 10 ms and for writes to under 15 ms. 
Pre-reserved Throughput which is defined by request units (RUs) is mainly determining the price of a Cosmos DB instance. Fetching of a single 1KB document by id spends roughly 1 RU. You can use the [Cosmos DB capacity planner tool](https://www.documentdb.com/capacityplanner) to calculate, how many RUs your database might need.
+Beside *Disk Space* and *Consistency*, Cosmos DB's main scale dimension is *Throughput*. For each collection, developers can reserve throughput for their data, which ensures the 99.99th percentile of latency for reads to under 10 ms and for writes to under 15 ms. Pre-reserved Throughput, which is defined in request units (RUs), largely determines the price of a Cosmos DB instance. Fetching a single 1KB document by id costs roughly 1 RU. You can use the [Cosmos DB capacity planner tool](https://www.documentdb.com/capacityplanner) to calculate how many RUs your database might need.

### 1.1 Create a Cosmos DB instance

@@ -32,19 +32,19 @@ After a few seconds, Azure should have created the database service and we can s

#### 1.2.1 Scalability and Consistency

-As we can see from the ***Overview*** section, Azure Cosmos DB is all about scalability and availability. We get greeted by a map that shows us, which regions our data gets synchronized to and we can easily add and remove regions by selecting or deselecting them on the map or the ***Replicate data globally section*** in the side menu.
+As we can see from the ***Overview*** section, Azure Cosmos DB is all about scalability and availability. We get greeted by a map that shows us which regions our data gets synchronized to, and we can easily add and remove regions by selecting or deselecting them on the map or the ***Replicate data globally section*** in the side menu.

-With scaling databases to multiple instances, *Consistency* immediately come to our minds.
By default, Cosmos DB uses *Session consistency* but we can choose from five different [Consistency levels](https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels) in the ***Default Consistency*** menu, if we feel the need to change that.
+When scaling databases to multiple instances, *Consistency* immediately comes to mind. By default, Cosmos DB uses *Session consistency*, but we can choose from five different [Consistency levels](https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels) in the ***Default Consistency*** menu if we feel the need to change that.

-> **Hint:** Even when selecting multiple regions for Azure Cosmos DB, the connection string will always stay the same. That's a very nice feature, which allows your backend to not care about the location of your database at all. Cosmos DB has its own traffic manager that will route your query to the fastest location autimatically.
+> **Hint:** Even when selecting multiple regions for Azure Cosmos DB, the connection string will always stay the same. That's a very nice feature, which allows your backend to not care about the location of your database at all. Cosmos DB has its own traffic manager that will route your query to the fastest location automatically.

#### 1.2.2 Security Keys

-Like every other database, Azure Cosmos DB offers security through access control using Keys. Head over to the ***Keys*** section of the data base to check your keys for different access levels (read-write and read-only) and connection strings. We will need these information later, when we connect the Cosmos DB to the Web API.
+Like every other database, Azure Cosmos DB offers security through access control using Keys. Head over to the ***Keys*** section of the database to check your keys for different access levels (read-write and read-only) and connection strings. We will need this information later, when we connect the Cosmos DB to the Web API.
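As a side note, the keys and connection strings can also be read with the Azure CLI instead of the portal. A sketch (the account and resource group names are placeholders for your own resources):

```bash
# List read-write and read-only keys of a Cosmos DB account.
# Account and resource group names are placeholders.
az cosmosdb keys list \
  --name myawesomestartupdb \
  --resource-group MyResourceGroup

# Alternatively, fetch ready-made connection strings.
az cosmosdb keys list \
  --name myawesomestartupdb \
  --resource-group MyResourceGroup \
  --type connection-strings
```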
#### 1.2.3 Data Explorer

-One nice feature of Azure Cosmos DB is the ***Data Explorer*** that can be found in the side menu and offers a live view on the data that sits inside the database. We can also edit and query the documents here.
+One nice feature of Azure Cosmos DB is the ***Data Explorer***, which can be found in the side menu and offers a live view of the data that sits inside the database. We can also edit and query the documents here.

At the moment our database is empty, but we will come back later to take a look at what's going on here.

@@ -88,7 +88,9 @@ Once we add the connection information to the App Service Settings, the Web API

![Empty List Of Jobs](Assets/EmptyListOfJobs.png)

-As we can see, (of course) there are no jobs inside the database at the moment. But we don't get an error message but an empty list. That means, that there is at least "something" inside of our database now. The [`DocumentDBRepositoryBase.cs`](/Backend/Monolithic/Services/DocumentDBRepositoryBase.cs#L97-L138) class creates databases and collections that are not existent automatically when it gets asked for them.
+
+As we can see, (of course) there are no jobs inside the database at the moment. But instead of an error message, we get an empty list. That means that there is at least "something" inside of our database now. The [`DocumentDBRepositoryBase.cs`](/Backend/Monolithic/Services/DocumentDBRepositoryBase.cs#L97-L138) class creates databases and collections automatically if they do not exist when it gets asked for them.
+

Let's check the Cosmos DB's ***Data Explorer*** at the Azure Portal to see what happened!

@@ -98,7 +100,7 @@ As we can see, a `contosomaintenance` database has been created with an empty `j

#### 1.4.2 Add a new document manually

-Time to add our first job manually!
Let's click the ***New Document*** button in the `jobs` collection and add a JSON document like the following one in the editor to add a dummy job that points to the Microsoft headquarter in Redmond. +Time to add our first job manually! Let's click the ***New Document*** button in the `jobs` collection and add a JSON document like the following one in the editor to add a dummy job that points to the Microsoft headquarters in Redmond. ```json { @@ -132,7 +134,7 @@ Once we hit ***Save***, we should be able to return to our API and fetch the lis #### 1.4.3 Generate Dummy Data -To have actual data in the Cosmos DB instance to play around with and to avoid having you to write a bunch of dummy Jobs and Parts manually, we have prepared some dummy data for this workshop. Once the Cosmos DB connection is configured, you can call the `api/dummy` endpoint of your Web API to fill the database. +To have actual data in the Cosmos DB instance to play around with and to avoid having you write a bunch of dummy Jobs and Parts manually, we have prepared some dummy data for this workshop. Once the Cosmos DB connection is configured, you can call the `api/dummy` endpoint of your Web API to fill the database. [//]: # (Empty line for spacing)   @@ -143,11 +145,11 @@ Now that we can store documents for *Jobs*, *Parts* and other unstructured data ### 2.1 Create a Storage Account -For that, head over to the [Azure Portal](https://portal.azure.com), click the ***New*** button, open the ***Storage*** category and select ***Storage Account*** to add some cloud storage to store your files at. +For that, head over to the [Azure Portal](https://portal.azure.com), click the ***New*** button, open the ***Storage*** category and select ***Storage Account*** to add some cloud storage to store your files. ![Add a Storage Account in the Azure Portal](Assets/AddStorageAccount.png) -Choose the following settings and hit the Create button to start provisioning the Storage Account. 
+Choose the following settings and hit the Create button to start provisioning the Storage Account: - **ID:** myawesomestartupstorage - **Deployment model:** Resource manager @@ -160,7 +162,7 @@ Choose the following settings and hit the Create button to start provisioning th ### 2.2 Explore Azure Blob Storage -After a few seconds, Azure provisioned a Storage Account for us and we can navigate to it in the Azure Portal. +After a few seconds, Azure has provisioned a Storage Account for us and we can navigate to it in the Azure Portal. ![Add a Storage Account in the Azure Portal](Assets/StorageAccountOverview.png) @@ -170,7 +172,7 @@ Besides Blob Storage, an Azure Storage Account bundles all kinds of storages lik #### 2.2.2 Security Keys -Similar to what we saw with Cosmos DB, Azure Storage is also secured with Access Keys to manage control. We will need also these information later, when we connect the Storage Account to the Web API the same way we did with Cosmos DB before. +Similar to what we saw with Cosmos DB, Azure Storage is also secured with Access Keys to manage control. We will also need this information later, when we connect the Storage Account to the Web API the same way we did with Cosmos DB before. #### 2.2.3 Configuration @@ -180,7 +182,7 @@ We can upgrade and configure our Storage Account to use Solid State Disks (Premi #### 2.3.1 Create Blob containers for photos -Before we connect the dots between the Web API backend and the Storage Account, we should create **Containers** for storing the uploaded photos at. Navigate to the ***Browse blobs*** section in the menu on the left and create a new container using the ***Add Container*** button. +Before we connect the dots between the Web API backend and the Storage Account, we should create **Containers** for storing the uploaded photos. Navigate to the ***Browse blobs*** section in the menu on the left and create a new container using the ***Add Container*** button. 
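If you prefer the command line over the portal, the container can also be created with the Azure CLI. A sketch (the storage account name follows the ID used in this walkthrough; adjust it to your own, and note the container name is the one this workshop uses for full-size uploads):

```bash
# Create a blob container with anonymous read access for blobs only.
# The storage account name is the one from this walkthrough - adjust as needed.
az storage container create \
  --name images-large \
  --account-name myawesomestartupstorage \
  --public-access blob
```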
![Add a Blob Storage Container](Assets/AddBlobContainer.png)

@@ -189,7 +191,7 @@ Let's create a container for the uploaded images in their original size with ano

- **Name:** images-large
- **Public access level:** Blob (anonymous read access for blobs only)

-The `images-large` containter will be used by the backend to upload all pictures that have been taken with the device camera to. Later in this workshop, we will down-scale these images automatically for performance enhancements at it is not a best practice to always download full-size images.
+The `images-large` container will be used by the backend to upload all pictures that have been taken with the device camera. Later in this workshop, we will down-scale these images automatically for performance enhancements, as it is not a best practice to always download full-size images.

So let's also create two more containers for scaled images with the same properties, so that we end up with three containers.

@@ -201,7 +203,7 @@ So let's also create two more containers for scaled images with the same propert

#### 2.3.2 Add Storage Queue

-Now that we have added Containers for uploaded photos, we use another Storage Type of Azure Storage Accounts: Storage Queues. Those are simple message queues that can handle any kind of information and saves them until they got processed. Although we do not need the Storage Queue for the image upload directly, it will become important later at this workshop and it is a good time to create it now.
+Now that we have added Containers for uploaded photos, we use another Storage Type of Azure Storage Accounts: Storage Queues. These are simple message queues that can handle any kind of information and save it until it gets processed. Although we do not need the Storage Queue for the image upload directly, it will become important later in this workshop and it is a good time to create it now.
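The queue can likewise be created from the Azure CLI. A sketch (the queue name `processphotos` is the one the resize Function listens on later in this workshop; the storage account name is a placeholder):

```bash
# Create the Storage Queue that will later feed the image-resizing Function.
# The storage account name is a placeholder - use your own account.
az storage queue create \
  --name processphotos \
  --account-name myawesomestartupstorage
```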
![Add Storage Queue](Assets/AddStorageQueue.png) @@ -237,7 +239,7 @@ Let's test if everything works as expected and send our first photo to the Web A #### 2.4.1 Uploading a photo -The API endpoint for the photo upload is `/api/photo/{jobId}` and we can basically upload any file we want. You can choose a picture from the web or your computer or use the [Demo-AirplaneAssembly.jpg](Assets/Demo-AirplaneAssembly.jpg) ([Source](https://en.wikipedia.org/wiki/Airplane)) from this repository. Make sure to send the picture as **form-data** file to the API as it expects it in the [`PhotoController.cs`](/Backend/Monolithic/Controllers/PhotoController.cs#L30). +The API endpoint for the photo upload is `/api/photo/{jobId}` and we can basically upload any file we want. You can choose a picture from the web or your computer or use the [Demo-AirplaneAssembly.jpg](Assets/Demo-AirplaneAssembly.jpg) ([Source](https://en.wikipedia.org/wiki/Airplane)) from this repository. Make sure to send the picture as a **form-data** file to the API as it expects it in the [`PhotoController.cs`](/Backend/Monolithic/Controllers/PhotoController.cs#L30). Take the `id` from any job in your Cosmos DB to build the url and attach the photo to a specific *Job*. diff --git a/Walkthrough Guide/05 Search/README.md b/Walkthrough Guide/05 Search/README.md index f8c85e39..8454a4b2 100644 --- a/Walkthrough Guide/05 Search/README.md +++ b/Walkthrough Guide/05 Search/README.md @@ -15,12 +15,12 @@ Select Azure Search and click 'Create'. ![Azure Search Configure](Assets/ConfigureSearchService.png) -You'll have a few options for pricing, but for this demo, we should have plenty of capacity left over if we use the Free tier. Once you've deployed Azure Search, go to the resource +You'll have a few options for pricing, but for this demo we should have plenty of capacity left over if we use the Free tier. 
Once you've deployed Azure Search, go to the resource.

![Azure Search Overview](Assets/SearchOverview.png)

### Indexing our data

-There are two ways to get data into Azure Search. The easiest is to make use of the automatic indexers. With the indexers, we're able to point Azure Search to our database and have it on a schedule look for new data. This can lead to situations where the database and search index are out-of-sync so be wary of using this approach in production. Instead, you should manage the search index manually using the lovely SDKs provided.
+There are two ways to get data into Azure Search. The easiest is to make use of the automatic indexers. With the indexers, we're able to point Azure Search to our database and have it follow a schedule to look for new data. This can lead to situations where the database and search index are out-of-sync, so be wary of using this approach in production. Instead, you should manage the search index manually using the lovely SDKs provided.

For ease of use, we'll make use of the Indexers to get some data quickly into our index.

@@ -40,7 +40,7 @@ Once you've selected your Cosmos DB account, you should be able to use the drop-

**Important Note** The Index name must be set to "job-index", because it is referred to by name in the mobile application.

-We need to configure what data we wish to send back down to the device with a search query as well as which properties we'll use to search. The Index is difficult to modify (apart from adding new fields) after we've created it, so its always worth double checking the values.
+We need to configure what data we wish to send back down to the device with a search query, as well as which properties we'll use to search. The Index is difficult to modify (apart from adding new fields) after we've created it, so it's always worth double checking the values.

**Important** You need to create a _suggester_ called 'suggestions'. This is referred to by the _search_ API which we're writing.
To do this, tick the 'suggester' box and enter 'suggestions' as its name. Then you also need to mark at least one field as being part of the suggester. We suggest(!) that the _Name_ and _Details_ fields are marked as such. @@ -50,7 +50,7 @@ Note that the screenshot above is slightly out of date, and the _Suggester_ is n Once you've completed this setup, click "Create". ![Azure Search Create Updates](Assets/IndexerSchedule.png) -You can now set the frequenancy at which Azure Search will look for new data. I recommend for this demo setting it to be 5 minutes. We can do this by selecting "custom". +You can now set the frequency at which Azure Search will look for new data. I recommend for this demo setting it to be 5 minutes. We can do this by selecting "custom". ![Azure Search Customer Timer](Assets/CustomTimer.png) We also want to track deletions, so go ahead and check the tickbox and select the 'isDelete' item from the drop-down menu and set the marker value to "true". diff --git a/Walkthrough Guide/06 Functions and Cognitive Services/README.md b/Walkthrough Guide/06 Functions and Cognitive Services/README.md index 9d3e37f0..62782bd6 100755 --- a/Walkthrough Guide/06 Functions and Cognitive Services/README.md +++ b/Walkthrough Guide/06 Functions and Cognitive Services/README.md @@ -2,19 +2,19 @@ # Smart Image Resizing with Azure Functions and Cognitive Services -We have come to a point where our backend has grown to a pretty solid state so let's do some of the more advanced stuff and add some intelligence to it! Not every developer has a background in Machine Learning and Artificial Intelligence to we should start with something simple: **Resizing uploaded images in an intelligent way**. +We have come to a point where our backend has grown to a pretty solid state so let's do some of the more advanced stuff and add some intelligence to it! 
Not every developer has a background in Machine Learning and Artificial Intelligence, so we should start with something simple: **Resizing uploaded images in an intelligent way**.

-You remember, users can add photos to *Jobs* and upload them through the Web API sothat they get stored in the Blob Storage. These photos are uploaded and stored in **full size**, which results in high network traffic and download times when the Mobile App is fetching them. Sometimes the App just needs a small or preview version of the photo, so it would be nice to store some smaller sizes of the photos in addition to the orginnally uploaded ones.
+You may remember that users can add photos to *Jobs* and upload them through the Web API so that they get stored in the Blob Storage. These photos are uploaded and stored in **full size**, which results in high network traffic and download times when the Mobile App is fetching them. Sometimes the App just needs a small or preview version of the photo, so it would be nice to store some smaller sizes of the photos in addition to the originally uploaded ones.

-The problem with simple resizing of the images to a certain square resolution like 150 x 150 pixels for thumbnail icons could cut off important parts of a picture that got taken in portrait- or landscape format. This is why it is recommended to use AI to understand what is shown on a picture and crop it accordingly.
+The problem with simple resizing of the images to a certain square resolution like 150 x 150 pixels for thumbnail icons is that it could cut off important parts of a picture that got taken in portrait or landscape format. This is why it is recommended to use AI to understand what is shown in a picture and crop it accordingly.

## 1. Microsoft Cognitive Services

-Great resources of Intelligence Services for developers without deeper Machine Learning knowledge are [Microsoft's Cognitive Services](https://azure.microsoft.com/en-us/services/cognitive-services/).
These are a set of pre-trained Machine Learning APIs across various sections like Vision, Speech or Knowledge that developer's can simply include within their applications using a REST API.
+A great resource for Intelligence Services for developers without deeper Machine Learning knowledge is [Microsoft's Cognitive Services](https://azure.microsoft.com/en-us/services/cognitive-services/). These are a set of pre-trained Machine Learning APIs across various sections like Vision, Speech or Knowledge that developers can simply include within their applications using a REST API.

### 1.1 Computer Vision for thumbnail generation

-One of these APIs is [Computer Vision](https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision/), a service that tries to understand what's on a picture or video. This service can analyze pictures to generate tags and captions, detect adult or racy content, read text in images, recognizes celebrities and landmarks, detects faces and emotions and much more. You should definitely take some time to explore and play around with all these services!
+One of these APIs is [Computer Vision](https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision/), a service that tries to understand what's on a picture or video. This service can analyze pictures to generate tags and captions, detect adult or racy content, read text in images, recognize celebrities and landmarks, detect faces and emotions and much more. You should definitely take some time to explore and play around with all these services!

![Cognitive Services Thumbnail Preview](Assets/CogServicesThumbnailPreview.png)

@@ -28,7 +28,7 @@ To add Computer Vision to our solution, enter the [Azure Portal](https://portal.

![Add Computer Vision to Azure](Assets/AddComputerVision.png)

-Choose the following settings and hit the ***Create*** button to start.
+Choose the following settings and hit the ***Create*** button to start:

- **ID:** myawesomenewstartupcognitivevision
- **Location:** Same as your Web App (or close, as Cognitive Services are not available in all Regions)

@@ -39,21 +39,21 @@ Once the deployment is succeeded, you can navigate to the resource and access th

## 2. Azure Functions

-Functions are a **Serverless** component of Microsoft Azure and abstract even more of the underlying hardware that Platform-as-a-Service (PaaS) offerings like App Service does. An Azure Functions basically just persists of a code snipped and some meta information when and how it should get executed. This code snipped sleeps until it got triggered by an event or other service, wakes up then, executes its code and falls asleep again.
+Functions are a **Serverless** component of Microsoft Azure and abstract away even more of the underlying hardware than Platform-as-a-Service (PaaS) offerings like App Service do. An Azure Function basically just consists of a code snippet and some meta information on when and how it should get executed. This code snippet sleeps until it gets triggered by an event or other service, wakes up, then executes its code and falls asleep again.

-This behaviour allows Microsoft to offer a [**very attractive pricing model**](https://azure.microsoft.com/en-us/pricing/details/functions/) where you only pay for pure execution time of an Azure Function. That means you only pay and Azure Function when it is actually used. If you write code that never gets executed, it won't cost you anything! The ultimate idea of cloud computing! Event better, [the first 1 million executions or 400000 GB-s are free](https://azure.microsoft.com/en-us/pricing/details/functions/)!
+This behaviour allows Microsoft to offer a [**very attractive pricing model**](https://azure.microsoft.com/en-us/pricing/details/functions/) where you only pay for pure execution time of an Azure Function.
That means you only pay for an Azure Function when it is actually used. If you write code that never gets executed, it won't cost you anything! The ultimate idea of cloud computing! Even better, [the first 1 million executions or 400000 GB-s are free](https://azure.microsoft.com/en-us/pricing/details/functions/)! > **Hint:** Azure Functions are the ideal service to extend existing large backend architectures with additional functionality or to process data in the cloud. The latter is exactly what we need to do when resizing images. -Whenever a user uploads an image, he should get immediate feedback and should not have to wait for the Cognitive Services. Once the image gets dropped to the Blob Storage, the Function awakes and calls the Cognitive Service API to resize it in a smart way in the background. Next time a user fetches images, he will receive the resized versions. +Whenever a user uploads an image, they should get immediate feedback and should not have to wait for the Cognitive Services. Once the image gets dropped to the Blob Storage, the Function awakes and calls the Cognitive Service API to resize it in a smart way in the background. Next time a user fetches images, they will receive the resized versions. -We have already prepared an Azure Function so we don't need to start from scratch! In the repository, there is an Azure Function called [`ResizeImage.cs`](/Backend/Functions/ResizeImage.cs) that contains the code for our scenario. +We have already prepared an Azure Function so we don't need to start from scratch! In the repository, there is an Azure Function called [`ResizeImage.cs`](/Backend/Functions/ResizeImage.cs) that contains the code for our scenario: 1. Get triggered by a Storage Queue message 1. Take an image from Azure Blob Storage -1. Upload it to the Cognitive Services Computer Vision API -1. Write the resized images back to Azure Blob Storage -1. Update the Cosmos DB entry +3. Upload it to the Cognitive Services Computer Vision API +4. 
Write the resized images back to Azure Blob Storage
+5. Update the Cosmos DB entry

### 2.1 Create an Azure Function

Multiple Azure Functions are hosted in a *Function App*. To create one, click th

![Add Azure Functions](Assets/AddAzureFunctions.png)

-Add a *Function App* to your solution using the following properties.
+Add a *Function App* to your solution using the following properties:

- **App name:** myawesomenewstartupfunctions
- **Resource Group:** Use existing

@@ -76,11 +76,11 @@ Click the ***Create*** button and wait until Azure provisioned your Function App

#### 2.1.1 Explore Function Apps

-Once the Function App has been created, we can navigate to it and start exploring the Dashboard. There is not much to see, as we have not any Functions and the moment and the Function App just acts as a container for those.
+Once the Function App has been created, we can navigate to it and start exploring the Dashboard. There is not much to see, as we don't have any Functions at the moment and the Function App just acts as a container for those.

![Explore Azure Functions](Assets/ExploreAzureFunctions.png)

-There are multiple ways to add Azure Functions. One is to click the small ***+*** button next to the ***Functions*** entry in the side menu and start from scratch. You can see, that Azure Functions are suitable for different scenarios like Webhooks, Timed executions or Data processing. This basically defines when Functions should be triggered. Azure also supports different programming languages.
+There are multiple ways to add Azure Functions. One is to click the small ***+*** button next to the ***Functions*** entry in the side menu and start from scratch. You can see that Azure Functions are suitable for different scenarios like Webhooks, Timed executions or Data processing. This basically defines when Functions should be triggered. Azure also supports different programming languages.
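Besides the portal, Functions can also be scaffolded locally with the Azure Functions Core Tools. A sketch of that route (the folder, function name and template here are illustrative only; this workshop already ships a finished `ResizeImage.cs`, so this is just to show the tooling):

```bash
# Scaffold a .NET Function App and a queue-triggered function locally.
# All names are illustrative - the workshop repository already contains the code.
func init MyFunctions --worker-runtime dotnet
cd MyFunctions
func new --name ResizeImage --template "QueueTrigger"
```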
#### 2.1.2 Tooling

@@ -107,7 +107,7 @@ It listens on a Storage Queue called `processphotos` and wakes up once a new mes

#### 2.1.4 Inputs and Outputs

-When an Azure Function awakes, it can fetch additional **Inputs** from multiple sources that are needed for the processing. Similar to the Triggers, these Inputs also use [Bindings](https://docs.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings). Beside the Queue message itself that wakes our Function up, it needs two additional inputs: The uploaded photo from the Blob Storage and the *Job* document from Cosmos DB. These are also defined in the function's code.
+When an Azure Function wakes, it can fetch additional **Inputs** from multiple sources that are needed for the processing. Similar to the Triggers, these Inputs also use [Bindings](https://docs.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings). Besides the Queue message itself that wakes our Function up, it needs two additional inputs: the uploaded photo from the Blob Storage and the *Job* document from Cosmos DB. These are also defined in the function's code.

```csharp
// Inputs

[View in project](/Backend/Functions/ResizeImage.cs#L25-L26)

-This passes a `Job job` based with its `id` set to `{jobId}` and a `byte[] imageLarge` from `/images-large/{photoId}.jpg` to the Function. The values `{jobId}` and `{photoId}` are from our Trigger the `PhotoProcess queueItem`.
+This passes a `Job job` with its `id` set to `{jobId}` and a `byte[] imageLarge` from `/images-large/{photoId}.jpg` to the Function. The values `{jobId}` and `{photoId}` are from our Trigger, the `PhotoProcess queueItem`.

Azure Function Outputs follow the same process. As we want to write two images to our Blob Storage (a medium sized and icon sized one), we define two outputs of the same Binding type.
@@ -133,7 +133,7 @@ Both `Stream` objects get passed to the Function. The rest of the code just conn

### 2.3 Integrate with Storage, Cosmos DB and Cognitive Services

-Of course, all these Trigger, Input and Output [Bindings](https://docs.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings) have to be configured. As we might be already used to from the App Service, this configuration is done via Environment Variables. Each Azure Function has a `local.settings.json` file that sets Connection Strings to the used services.
+Of course, all these Trigger, Input and Output [Bindings](https://docs.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings) have to be configured. As we might already be used to from the App Service, this configuration is done via Environment Variables. Each Azure Function has a `local.settings.json` file that sets Connection Strings to the used services.

```json
{
@@ -150,11 +150,11 @@ Of course, all these Trigger, Input and Output [Bindings](https://docs.microsoft

[View in project](/Backend/Functions/local.settings.json)

-For local tests, the Environment Variables can be set in this file, when uploading the Function to Azure, we should save them in the Function App's Application Settings. Navigate to the ***Function App*** in the [Azure Portal](https://portal.azure.com), open the ***Application Settings*** and add the Keyes.
+For local tests, the Environment Variables can be set in this file; when uploading the Function to Azure, we should save them in the Function App's Application Settings. Navigate to the ***Function App*** in the [Azure Portal](https://portal.azure.com), open the ***Application Settings*** and add the Keys.

![Set Function Application Settings](Assets/SetFunctionApplicationSettings.png)

-Add the settings like the following - getting the values from the relevant sections of your previously created Azure resources.
+Add the settings like the following - getting the values from the relevant sections of your previously created Azure resources: - **AzureWebJobsDashboard:** *Key 1 Connection String* from the Storage Account ***Access keys*** section (should be already set) - **AzureWebJobsStorage:** *Key 1 Connection String* from the Storage Account ***Access keys*** section (should be already set) @@ -166,7 +166,7 @@ Scroll up and click ***Save*** to set the Environment Variables for the Function ### 2.6 Deploy to Azure -Similar to the ASP.NET Core Web API project, we also need to compile the Azure Function code into an executable, before we can upload it to the cloud. For this, right-click the `Functions` folder in Visual Studio Code and select ***Open in Terminal / Command Line***. +Similar to the ASP.NET Core Web API project, we also need to compile the Azure Function code into an executable, before we can upload it to the Cloud. For this, right-click the `Functions` folder in Visual Studio Code and select ***Open in Terminal / Command Line***. The Terminal window in Visual Studio Code pops up and we can enter the command to compile the application. @@ -174,7 +174,7 @@ The Terminal window in Visual Studio Code pops up and we can enter the command t dotnet build ``` -The output should look like this and we should see the **Build succeeded** message. +The output should look like this and we should see the **Build succeeded** message: ![Build an Azure Function in Visual Studio Code](Assets/VSCodeAzureFunctionBuild.png) diff --git a/Walkthrough Guide/07 API Management/README.md b/Walkthrough Guide/07 API Management/README.md index 593ebe40..092bc2db 100644 --- a/Walkthrough Guide/07 API Management/README.md +++ b/Walkthrough Guide/07 API Management/README.md @@ -1,12 +1,12 @@ # API Management -Azure API Management is a turnkey solution for publishing APIs for external and internal consumption. 
It allows for the quick creation of consistent and modern API gateways for existing or new backend services hosted anywhere, enabling security and protection of the APIs from abuse and overuse. We like to think of API Management as businesses digital transformation hub as it empowers organisations to scale developer onboarding as well as monitoring the health of services.
+Azure API Management is a turnkey solution for publishing APIs for external and internal consumption. It allows for the quick creation of consistent and modern API gateways for existing or new backend services hosted anywhere, enabling security and protection of the APIs from abuse and overuse. We like to think of API Management as a business's digital transformation hub, as it empowers organisations to scale developer onboarding as well as to monitor the health of services.

![Highlighted Architecture Diagram](Assets/HighlightedArchitecture.png)

#### Why API Management

-We'll be using API Management in today's workshop to act as both a gateway our Azure Resources and as a source of documentation about what features we've made available to consumers of our services.
+We'll be using API Management in today's workshop to act as both a gateway to our Azure Resources and as a source of documentation about what features we've made available to consumers of our services:

- Package and publish APIs to developers and partners
- Onboard developers via self-service portal
@@ -25,7 +25,7 @@ We'll be using API Management in today's workshop to act as both a gateway our A

You can find our API Management portal running [here](https://contosomaintenance.portal.azure-api.net/)

### Exploring APIs
-You can explore APIs with API Management and even get automatically generated snippets in a variety of languages which demonstrate whats required to interact with the Azure services.
+You can explore APIs with API Management and even get automatically generated snippets in a variety of languages which demonstrate what's required to interact with the Azure services.

![Developer Portal showing the Get Job API](Assets/DeveloperPortalApiView.png)

@@ -41,7 +41,7 @@ Select the ***API Management*** result. You'll then navigate to the Creation bla

![Search for API Management](Assets/ApiManagementFillInfo.png)

-Choose the following settings and hit the Create button to start provisioning the API Management instance.
+Choose the following settings and hit the Create button to start provisioning the API Management instance:

- **Name:** myawesomeneapi
- **Resource Group:** Use existing
@@ -60,11 +60,11 @@ It's worth checking that the service is active after deployment as this can take

## 2. Understanding our usage

-We're using API Management as our access layer, routing all HTTP requests to our backend through it. You can see this below in this basic diagram (it's not the entire architecture, but more of a high-level overview).
+We're using API Management as our access layer, routing all HTTP requests to our backend through it. You can see this in the basic diagram below (it's not the entire architecture, but more of a high-level overview):

![Search for API Management](Assets/RequestFlow.png)

-If we imagine the flow for searching jobs. Our request leaves the phone, hits our API Management, which will route it to the nearest instance of our backend. The backend that takes the request and routes it to the correct controller, which has the implementation for interacting with Azure Search.
+If we imagine the flow for searching jobs: our request leaves the phone and hits our API Management service, which will route it to the nearest instance of our backend. The backend then takes the request and routes it to the correct controller, which has the implementation for interacting with Azure Search.

## 3.
Configuring API Management @@ -80,7 +80,7 @@ To kick off, we'll create the Parts API manually, and then for the rest of the A #### 3.1.1 Parts -Parts is one of the easiest APIs to implement within the project as we'll only be requesting an array of parts from our backend. We don't-have any variables within our queries or other elements that could complicate the request. +Parts is one of the easiest APIs to implement within the project as we'll only be requesting an array of parts from our backend. We don't have any variables within our queries or other elements that could complicate the request. ![Search for API Management](Assets/AddAPIPartsFirstStep.png) @@ -90,7 +90,7 @@ Click on the ***Add API*** Button and select ***Blank API***. ![Search for API Management](Assets/AddingPartsAPI.png) -We can then provide a few details about our API. +We can then provide a few details about our API: - **Display Name:** This name is displayed in the Developer portal. - **Name:** Provides a unique name for the API. @@ -112,7 +112,7 @@ We can then click on ***Add Operation***. ![Search for API Management](Assets/CreatePartsGETOperation.png) -By default, operations will be set configured for GET Requests, but we can change this using the drop-down menu. +By default, operations will be configured for GET Requests, but we can change this using the drop-down menu. - **HTTP Verb:** You can choose from one of the predefined HTTP verbs. - **URL:** A URL path for the API. diff --git a/Walkthrough Guide/08 Mobile Overview/README.md b/Walkthrough Guide/08 Mobile Overview/README.md index d3d0b893..60703f58 100644 --- a/Walkthrough Guide/08 Mobile Overview/README.md +++ b/Walkthrough Guide/08 Mobile Overview/README.md @@ -8,30 +8,30 @@ The mobile app currently runs on both iOS and Android devices using Xamarin.Form ### 2.1 Development SDK The apps have been developed with [Xamarin.Forms](https://github.com/xamarin/Xamarin.Forms) targeting .NET Standard 2.0. 
You should find all your favourite .NET libraries will work with both the backend (also targeting .NET Standard 2.0) and the mobile apps.

-Using Xamarin.Forms makes it possible for us to write our app just once using C# and XAML and have it run natively on a variety of platforms. This is achieved as it's an abstraction API built on top of Xamarin's traditional mobile development SDKs. Looking at the architecture below, you can see that with traditional Xamarin we can achieve up to 75% code reuse through sharing the business logic of our app.
+Using Xamarin.Forms makes it possible for us to write our app just once using C# and XAML and have it run natively on a variety of platforms. This is achieved because it's an abstraction API built on top of Xamarin's traditional mobile development SDKs. Looking at the architecture below, you can see that with traditional Xamarin we can achieve up to 75% code reuse through sharing the business logic of our app.

-Before we jump into Xamarin.Forms in any depth let take a moment to understand the underlying technology and how this works.
+Before we jump into Xamarin.Forms in any depth, let's take a moment to understand the underlying technology and how this works.

![Xamarin Styles](Assets/XamarinArchitectures.png)

#### Traditional Xamarin
Traditional Xamarin is a one-to-one mapping of every single API available to Objective-C and Java developers for C# developers to consume. If you're familiar with Platform Invocation, then you'll already be familiar with the core concepts of how Xamarin works. It's this one-to-one mapping that is the platform-specific element of a Xamarin app. It's not possible to share the UI layer from iOS to Android when developing with Traditional Xamarin as you won't find iOS APIs such as UIKit as part of the Android SDK. This means that our user interface is unique for the platform and we can create the amazing user experience our users expect from mobile apps.
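The shareable "business logic" in a Traditional Xamarin app is plain BCL code. As a hypothetical illustration (the types and names below are made up, not from this project), a filter like this compiles unchanged into both the iOS and the Android head projects:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical shared-core class: it uses only the Base Class Library,
// so both platform projects can reference it without modification.
public class Job
{
    public string Name { get; set; }
    public DateTime DueDate { get; set; }
}

public static class JobFilter
{
    // Pure business logic: no UIKit, no Android SDK, just BCL types.
    public static IEnumerable<Job> Overdue(IEnumerable<Job> jobs, DateTime now) =>
        jobs.Where(j => j.DueDate < now).OrderBy(j => j.DueDate);
}
```

Each platform project then renders the returned list with its own native UI, which is exactly the split the architecture diagram above describes.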
-Where we can share code is the business logic or 'boring bits' of the application. As a general rule, if you writing code that is using only the Base Class Library (BCL) then you should be a great position to reuse this code as the Share C# Core of your app. If you've got existing .NET libraries that you'd like to analyze, then you should install the [.NET Portability Analyzer](https://marketplace.visualstudio.com/items?itemName=ConnieYau.NETPortabilityAnalyzer).
+Where we can share code is the business logic or 'boring bits' of the application. As a general rule, if you are writing code that is using only the Base Class Library (BCL) then you should be in a great position to reuse this code as the Shared C# Core of your app. If you've got existing .NET libraries that you'd like to analyze, then you should install the [.NET Portability Analyzer](https://marketplace.visualstudio.com/items?itemName=ConnieYau.NETPortabilityAnalyzer).

Traditional Xamarin apps perform exceptionally well compared to their 'native native' counterparts, with some benchmarks showing a notable performance increase when picking Xamarin over the 'native native' approach. One concern we hear from potential users of Xamarin is taking on a large dependency like the Mono runtime in their app. It's worth understanding that our build process does much to reduce the size of our final binary. When building any Xamarin app for release, we make use of a Linker to remove any unused code, including the Mono Runtime and your code. This significantly reduces the size of the app from Debug to Release.

-You should consider Traditional Xamarin when you care about code-reuse but not as much as customisation.
+You should consider Traditional Xamarin when you care about code-reuse but not as much about customisation.
It's also a great fit if you're experienced with Objective-C, Swift or Java in a mobile context but wish to leverage an existing .NET codebase.

#### Xamarin.Forms
Xamarin.Forms is an open-source, cross-platform development library for building native apps for iOS, Android, Mac, Windows, Linux and more. By picking Xamarin.Forms, we're able to reuse our previous experience with Silverlight, WPF and UWP development to target a variety of new platforms. Because it's an abstraction over Traditional Xamarin, it still produces 100% native apps using the same build process, but we can write our code in a .NET Standard library to be shared across multiple platforms.

Xamarin.Forms is a fantastic technology for building mobile apps if you've previous experience with MVVM, WPF or Silverlight. It focuses on code-reuse over customisation, but that doesn't limit us from dropping down into platform-specific APIs when we want to add deeper integrations to the underlying platforms.

-Xamarin.Forms come with 24 controls out of the box, which map directly to their native type. For example, a Xamarin.Forms Button will create a Widget.Button on Android and UIKit.UIButton on iOS. Forms provide a consistent API across all the platforms it supports. This allows us to ensure that functionality we call on iOS will behave the same on Android.
+Xamarin.Forms comes with 24 controls out of the box, which map directly to their native type. For example, a Xamarin.Forms Button will create a Widget.Button on Android and UIKit.UIButton on iOS. Forms provides a consistent API across all the platforms it supports. This allows us to ensure that functionality we call on iOS will behave the same on Android.

Forms is a great way to leverage existing C# and .NET knowledge to build apps for platforms you may have historically considered not .NET compatible.
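To make the control mapping concrete, here is a minimal Xamarin.Forms page (a sketch for illustration, not code from this project): the single `Button` below is rendered as a `Widget.Button` on Android and a `UIKit.UIButton` on iOS at runtime.

```csharp
using Xamarin.Forms;

// Minimal sketch: one shared page, rendered with native controls on each platform.
public class WelcomePage : ContentPage
{
    public WelcomePage()
    {
        var button = new Button { Text = "Get Started" };
        button.Clicked += (s, e) =>
            DisplayAlert("Hello", "Welcome to the workshop", "OK");

        Content = new StackLayout
        {
            Padding = 20,
            Children =
            {
                new Label { Text = "Contoso Maintenance" },
                button
            }
        };
    }
}
```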
diff --git a/Walkthrough Guide/09 Mobile Network Services/README.md b/Walkthrough Guide/09 Mobile Network Services/README.md
index 78ab290c..87813010 100644
--- a/Walkthrough Guide/09 Mobile Network Services/README.md
+++ b/Walkthrough Guide/09 Mobile Network Services/README.md
@@ -1,7 +1,7 @@
![Banner](Assets/Banner.png)
# Mobile App Network Services

-Our mobile app connects to our Azure API Management sending HTTP requests to remote services to request resources. The implementation within this demo is very lightweight and designed for use in a POC rather than a production app. If you’d like to see a more resilient approach to building networking services then check out the “resilient networking” branch. Here we’ve implemented a [data caching and a request retry policy](https://github.com/MikeCodesDotNet/Mobile-Cloud-Workshop/blob/b4833120d9ceb70abb8753581f133f3467665edd/Mobile/ContosoFieldService.Core/Services/JobsAPIService.cs#L45), which exponentially delays retry attempts. We’ll cover this in more detail later, but for our standard app, we’re using an MVP approach.
+Our mobile app connects to our Azure API Management service, sending HTTP requests to remote services to request resources. The implementation within this demo is very lightweight and designed for use in a POC rather than a production app. If you’d like to see a more resilient approach to building networking services then check out the “resilient networking” branch. Here we’ve implemented [data caching and a request retry policy](https://github.com/MikeCodesDotNet/Mobile-Cloud-Workshop/blob/b4833120d9ceb70abb8753581f133f3467665edd/Mobile/ContosoFieldService.Core/Services/JobsAPIService.cs#L45), which exponentially delays retry attempts. We’ll cover this in more detail later, but for our standard app, we’re using an MVP approach.

We separate out each API from the API management service that we’ll be interacting with.
In this case, you’ll see the following directory structure in the [Xamarin.Forms main shared library.](https://github.com/MikeCodesDotNet/Mobile-Cloud-Workshop/tree/master/Mobile/ContosoFieldService.Core)
@@ -19,9 +19,9 @@ We separate out each API from the API management service that we’ll be interac

Each file contains two classes (we know this is bad practice, but bear with us 😏), where you can easily see how we’ve abstracted away our REST calls using a 3rd party package.

## Refit
-Refit is a REST library for .NET developers to easily interact with remote APIs . It make heavy usage of generics and abstractions to minimises the amount of boiler-plate code required to make http requests.
+Refit is a REST library for .NET developers to easily interact with remote APIs. It makes heavy use of generics and abstractions to minimise the amount of boilerplate code required to make HTTP requests.

-It requires us to define our REST API calls as a C# Interface which is then used with a HTTPClient to handle all the requests"
+It requires us to define our REST API calls as a C# Interface, which is then used with an `HttpClient` to handle all the requests.

#### Security

@@ -67,7 +67,7 @@ Task UpdateJob(string id, [Body] Job job);
```
#### Using the service Interface
-Our service implementation is pretty straight forward. We create a class to handle the service implementation. Well stub out the methods to map closely to our interface.
+Our service implementation is pretty straightforward. We create a class to handle the service implementation and stub out the methods to map closely to our interface.

```cs

@@ -116,7 +116,7 @@ public async Task GetJobByIdAsync(string id)

**Resilient**

-To build a service layer that is resilient to network outages or poor connectivity, we would want to grab a few extra packages. The first being the Xamarin Connectivity Plugin. This allows us to query what our network connectivity looks like before we decide how to process a request for data.
We may want to return a cached copy if its still valid and we’ve poor connectivity. Alternatively we may want to do a remote fetch and save the response for next time. To help combat against poor connectivity, we also use Polly to handle timeouts and retry logic. You can see in the example below, we will try 5 times before giving up.
+To build a service layer that is resilient to network outages or poor connectivity, we will want to grab a few extra packages. The first is the Xamarin Connectivity Plugin. This allows us to query what our network connectivity looks like before we decide how to process a request for data. We may want to return a cached copy if it's still valid and we have poor connectivity. Alternatively, we may want to do a remote fetch and save the response for next time. To help combat poor connectivity, we also use Polly to handle timeouts and retry logic. You can see in the example below that we will try 5 times before giving up.

```cs

diff --git a/Walkthrough Guide/10 Chatbot/README.md b/Walkthrough Guide/10 Chatbot/README.md
index 0e5b869d..49e66b0c 100644
--- a/Walkthrough Guide/10 Chatbot/README.md
+++ b/Walkthrough Guide/10 Chatbot/README.md
@@ -1,15 +1,15 @@
![Banner](Assets/Banner.png)

-Creating intelligent infused apps is now the norm to stay current and competitive. Microsoft offers a wide variety of AI platforms that can be consumed through any device.
+Creating intelligence-infused apps is now the norm to stay current and competitive. Microsoft offers a wide variety of AI platforms that can be consumed through any device.

-Bots are a fantastic channel to deliver intelligent experience. Contoso Maintenance Bot offers a conversational bot that integrates with Azure Search to retrieve relevant jobs from CosmosDB. The bot uses Microsoft’s Bot Framework with LUIS (Language Understanding Intelligent Service).
+Bots are a fantastic channel to deliver intelligent experiences.
Contoso Maintenance Bot offers a conversational bot that integrates with Azure Search to retrieve relevant jobs from CosmosDB. The bot uses Microsoft’s Bot Framework with LUIS (Language Understanding Intelligent Service).

Creating an intelligent bot for Contoso Maintenance is a simple four-step process: first the LUIS model, then the bot app, the bot backend and finally the mobile integration.

## 1. LUIS (Language Understanding Intelligent Service)

-LUIS enables you to integrate natural language understanding into your chatbot or other application without having to create the complex part of machine learning models. Instead, you get to focus on your own application's logic and let LUIS do the heavy lifting.
+LUIS enables you to integrate natural language understanding into your chatbot or other applications without having to build the complex machine learning models yourself. Instead, you get to focus on your own application's logic and let LUIS do the heavy lifting.

-Starting with the intelligence part of the bot, LUIS, you can start by creating your model at https://www.luis.ai/ (or https://eu.luis.ai/ if you intend to host your bot in European data centers). There you will find a link to sign up along with abundant information to get you started.
+Starting with the intelligence part of the bot, LUIS, start by creating your model at https://www.luis.ai/ (or https://eu.luis.ai/ if you intend to host your bot in European data centers). There you will find a link to sign up along with abundant information to get you started.

![LUIS Welcome Page](Assets/LUISWelcome.png)

@@ -24,7 +24,7 @@ After creating your app (or opening an existing app) make sure that (Build) tab

![LUIS Build](Assets/Intents.png)

-Let’s get out of the ways a few terms that you need to be familiar with in LUIS:
+Let’s get out of the way a few terms that you need to be familiar with in LUIS:

***Intents***

@@ -38,11 +38,11 @@ An utterance is a textual input that LUIS will interpret.
LUIS first uses exampl

You can think of entities like variables in algebra; it will capture and pass relevant information to your client app. In the utterance, "I want to buy a ticket to Seattle", you would want to capture the city name, Seattle, with an entity like destination_city. Now LUIS will see the utterance as, "I want to buy a ticket to {destination_city}". This information can now be passed on to your client application and used to complete a task. See Entities in LUIS for more detail.

-Now let’s start by creating a new intent, in our case “greeting” intent. Next is writing as many Utterance as you need to represent a user greeting:
+Now let’s start by creating a new intent, in our case a “greeting” intent. Next is writing as many Utterances as you need to represent a user greeting:

![LUIS Utterance](Assets/GreetingUtterance.png)

-Greeting intent is easy in our case; just we want to respond to this by saying “welcome, this is what I can do…”
+Greeting intent is easy in our case; we just want to respond to this by saying “welcome, this is what I can do…”

You can include a cancel intent to indicate that the user does not wish to proceed or to disregard their request (in our case we are not using one).

@@ -63,11 +63,11 @@ Entities support multiple types based on its nature. In ContosoMaintenance we us

![Job Type Entity](Assets/TypeEntity.png)

### 1.2 Train and test
-After updating the entities or updating any of the utterances, you need to re-train your model which indicated by a red bulb in the train button:
+After updating the entities or updating any of the utterances, you need to re-train your model, which is indicated by a red bulb on the train button:

![LUIS Model Training](Assets/TrainButton.png)

-Click train often after completing a set of changes. Also, you need to do this before trying to test your model.
+Click train regularly after completing a set of changes. Also, you need to do this before trying to test your model.
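Beyond the portal's test pane, you can also sanity-check a published model over its REST endpoint. The sketch below is an assumption based on the v2.0 endpoint format of that era (the region, app id and key are placeholders; copy the exact URL from the Publish page of your LUIS app):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch: query a published LUIS app over HTTP and print the raw JSON result,
// which contains the top-scoring intent and any recognised entities.
class LuisSmokeTest
{
    static async Task Main()
    {
        var appId = "<your-luis-app-id>";          // placeholder
        var key = "<your-luis-subscription-key>";  // placeholder
        var query = Uri.EscapeDataString("I need a technician for a broken part");

        using (var client = new HttpClient())
        {
            // v2.0 endpoint format (an assumption; verify on your app's Publish page)
            var url = $"https://westeurope.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}" +
                      $"?subscription-key={key}&q={query}";
            var json = await client.GetStringAsync(url);
            Console.WriteLine(json);
        }
    }
}
```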
You can access the test by clicking on the blue test button:
@@ -98,17 +98,17 @@ Once it is provisioned, you can find your keys by navigating to it:

Copy your primary key and add it to your LUIS app on the builder website (https://www.luis.ai or https://eu.luis.ai).

-> **Hint:** It may take several mins for your new keys to be accessible. Please wait a bit before start using them.
+> **Hint:** It may take several minutes for your new keys to become accessible. Please wait a bit before you start using them.

Although you are ready for prime time, you will probably go back to your model and introduce improvements and adjustments on a regular basis to make sure you continue to present the best value to your bot users.

## 2. Chat Bot
-Now we have our brain behind our Bot good to go; it is time to think about the bot itself. So bots in a general terms are automation software. This means they are essentially stupid 😊. What makes a bot smart or not, are the actual services that it automates the communication to and from.
+Now that the brain behind our Bot is good to go, it is time to think about the bot itself. Bots in general terms are automation software. This means they are essentially stupid 😊. What makes a bot smart are the actual services that it automates the communication to and from.

What you need to have your bot up and running with basic functionality is a bot app and a bot backend.

### 2.1 Azure Bot Service
-Azure Bot Service allows you to build, connect, deploy, and manage intelligent bots to naturally interact with your users on a website, app, Cortana, Microsoft Teams, Skype, Slack, Facebook Messenger, and more. Get started quick with a complete bot building environment, all while only paying for what you use.
+Azure Bot Service allows you to build, connect, deploy, and manage intelligent bots to naturally interact with your users on a website, app, Cortana, Microsoft Teams, Skype, Slack, Facebook Messenger, and more.
Get started quickly with a complete bot building environment, all while only paying for what you use.

It also speeds up development by providing an integrated environment that's purpose-built for bot development with the Microsoft Bot Framework connectors and BotBuilder SDKs. Developers can get started in seconds with out-of-the-box templates for scenarios including basic, form, language understanding, question and answer, and proactive bots.

@@ -127,7 +127,7 @@ Below is the architecture of the bot components.

### 2.2 Bot Web App Backend
Now you have a bot service that is ready for your development input. The bot backend is located at Mobile-Cloud-Workshop/Backend/BotBackend/ in the git repo. Open the solution in Visual Studio 2017 (Community edition will work as well).

-> **Hint:** As we develop this project, Bot framework didn’t fully support .NET Core. This meant that we couldn’t develop it on a Mac as we needed the full .NET Framework library to leverage all the features of the Bot Framework. Bot team is working on releasing a full .NET Core support soon.
+> **Hint:** As we developed this project, the Bot Framework didn’t fully support .NET Core. This meant that we couldn’t develop it on a Mac, as we needed the full .NET Framework library to leverage all the features of the Bot Framework. The Bot team is working on releasing full .NET Core support soon.

After opening the solution, the first thing to do is update the settings in the web.config section below with your keys:

```xml

```

-After updating the keys, you can publish the project to Azure directly from Visual Studio publish options (right-click the project -> Publish).
You can connect directly to Azure using your credentials or Import Profile (you can get the publishing provide from the bot web app overview window -> Get publish profile)
+After updating the keys, you can publish the project to Azure directly from Visual Studio publish options (right-click the project -> Publish). You can connect directly to Azure using your credentials or Import Profile (you can get the publish profile from the bot web app overview window -> Get publish profile).

You are done! Congratulations!

-Now to test the actual bot implementation and code you can open your bot service from Azure and click on the blade says “Test in Web Chat”
+Now to test the actual bot implementation and code, you can open your bot service from Azure and click on the blade that says “Test in Web Chat”

![Bot Testing](Assets/AzureBotTesting.png)

-> **Hint:** As a recommended practice, you should remove all of your secrets from web.conig and put them inside the “App Settings” blade on Azure Web App service. This way you avoid checking in your secrets in source control.
+> **Hint:** As a recommended practice, you should remove all of your secrets from web.config and put them inside the “App Settings” blade of the Azure Web App service. This way you avoid checking your secrets into source control.

## 3. Integration with Mobile App

-So now after you have built, tested and deployed your bot you can easily integrate in a Mobile App through a simple WebView screen. Just find your Web channel bot URL and included in your app.
+So now, after you have built, tested and deployed your bot, you can easily integrate it into a Mobile App through a simple WebView screen. Just find your Web channel bot URL and include it in your app.

![Bot URL](Assets/AzureBotWebUrl.png)

You can reach the Web channel configuration page from the "Channels" blade in your Azure Bot Service instance.
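In a Xamarin.Forms app like this one, embedding the Web Chat channel can be as small as a page hosting a `WebView`. The following is a sketch (the `Source` URL is a placeholder; use the embed URL shown on your bot's Channels blade):

```csharp
using Xamarin.Forms;

// Sketch: host the bot's Web Chat channel inside the mobile app.
// Replace the placeholders with the embed URL from your bot's "Channels" blade.
public class BotPage : ContentPage
{
    public BotPage()
    {
        Title = "Contoso Bot";
        Content = new WebView
        {
            Source = "https://webchat.botframework.com/embed/<your-bot-handle>?s=<your-secret>",
            VerticalOptions = LayoutOptions.FillAndExpand
        };
    }
}
```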
-> **Hint:** To have more control on the bot interactions and improve user experience, it is recommended to replace the WebView approach with a more solid native experience. This is done through using configuring and using “Direct Channel” on your bot. Direct channel is about using pure APIs to communicate with the bot. Refere back to Bot Framework documentation for more inforamtion
+> **Hint:** To have more control over the bot interactions and improve the user experience, it is recommended to replace the WebView approach with a more solid native experience. This is done by configuring and using the “Direct Line” channel on your bot. Direct Line is about using pure APIs to communicate with the bot. Refer back to the Bot Framework documentation for more information.

---

diff --git a/Walkthrough Guide/11 Authentication/README.md b/Walkthrough Guide/11 Authentication/README.md
index 22cf69ec..e0de24d2 100644
--- a/Walkthrough Guide/11 Authentication/README.md
+++ b/Walkthrough Guide/11 Authentication/README.md
@@ -4,7 +4,7 @@

Adding Authentication to our app and backend is a little outside of the scope of today's workshop due to time constraints. We believe Authentication is an important enough topic that we've opted to include a guide for you to get an understanding of the key concepts required to implement any Identity Provider into your projects. For that, we chose [Azure Active Directory B2C](https://azure.microsoft.com/services/active-directory-b2c/) to manage users and authentication as our service of choice.

-> **Hint:** The Mobile App uses the [OAuth 2.0 Implicit Authentication flow](https://oauth.net/2/grant-types/implicit/), which shows the user an Web Browser windows instead of native Textboxes for entering username and password. This adds security as users don't have to trust the app developer to store and hanlde their passwords securely.
+> **Hint:** The Mobile App uses the [OAuth 2.0 Implicit Authentication flow](https://oauth.net/2/grant-types/implicit/), which shows the user a Web Browser window instead of native Textboxes for entering username and password. This adds security as users don't have to trust the app developer to store and handle their passwords securely. > > Although Azure ADB2C also supports a [native login with resource owner password credentials flow (ROPC)](https://docs.microsoft.com/en-us/azure/active-directory-b2c/configure-ropc?WT.mc_id=b2c-twitter-masoucou), it is [not recommended from a security perspective](https://www.scottbrady91.com/OAuth/Why-the-Resource-Owner-Password-Credentials-Grant-Type-is-not-Authentication-nor-Suitable-for-Modern-Applications). @@ -29,7 +29,7 @@ Creating a new Azure Active Directory Service is a bit tricky and requires some ### 1.1 Create a new Tenant -Browse to the [Azure Portal](https://portal.azure.com), click the ***Create a new resource*** button, search for *"Azure Active Directory B2C"* and click the ***Create*** button of the regarding blade to start the creation wizard. +Browse to the [Azure Portal](https://portal.azure.com), click the ***Create a new resource*** button, search for *"Azure Active Directory B2C"* and click the ***Create*** button on the relevant blade to start the creation wizard. ![Create a new AADB2C Tenant](Assets/CreateNewAADB2C.png) @@ -45,7 +45,7 @@ Once the Tenant has been created, it needs to be linked to an Azure Subscription ![Link Existing AADB2C Tenant](Assets/LinkExistingAADB2CTenant.png) -Fill in the required information and hit **Create**. 
+Fill in the required information and hit **Create**: - **Azure ADB2C Tenant:** Your recently created Tenant - **Azure ADB2C Resource name:** *Filled in automatically* @@ -59,13 +59,13 @@ When we navigate to the B2C Tenant that we have just created, we will not see ma ### 2.1 Add a new Sign-up or sign-in policy -Enabling users to log into our Active Directory or to create an Account in there by themselves is a good start. For this, we need a *Policy*. In Active Directory, Policies define how users can log in, which Authentication Providers (like Facebook) they can use and what important information is, that users have to provide. +Enabling users to log into our Active Directory or to create an Account in there by themselves is a good start. For this, we need a *Policy*. In Active Directory, Policies define how users can log in, which Authentication Providers (like Facebook) they can use, and what information users have to provide. -To add a new Policy, click on ***Sign-up or sign-in policies*** in the side menu of the Azure AD B2C window and add a new Policy using the ***Add*** button at the top. +To add a new Policy, click on ***Sign-up or sign-in policies*** in the side menu of the Azure ADB2C window and add a new Policy using the ***Add*** button at the top. ![Add Policy](Assets/AddPolicy.png) -When defining a new policy, Azure will ask you for a bunch of attributes so let's inspect them quickly to make the right choices. +When defining a new policy, Azure will ask you for a bunch of attributes, so let's inspect them quickly to make the right choices: #### Identity providers @@ -73,11 +73,11 @@ The services, we want to allow users to register at and log into our application #### Sign-up attributes -We already talked about these. Here we can define, which information a user has to provide to us, when he signs up for our application for the first time. +We already talked about these.
Here we can define which information a user has to provide when they sign up for our application for the first time. #### Application claims -This is the information that Active Directory gives back to our application once the user logs in. We definitely want to get his **User's Object ID** but also might want to get his name or address back. +This is the information that Active Directory gives back to our application once the user logs in. We definitely want to get the user's **Object ID** back, but might also want their name or address. #### Multifactor authentication @@ -89,7 +89,7 @@ As you can see later, the Login UI looks pretty poor by default. Here we can cha ![Configure Policy](Assets/ConfigurePolicy.png) -Create your first policy with the inputs below and confirm your selections with the ***Create*** button. +Create your first policy with the inputs below and confirm your selections with the ***Create*** button: - **Name:** GenericSignUpSignIn - **Identity providers:** Email signup @@ -106,7 +106,7 @@ Now that users can sign-up and log into our Active Directory, we need to registe Select the ***Applications*** menu and click the ***Add*** button from the new blade that appears. -Here you're going to give the Azure AD B2C application a name and specify whether it should contain a Web API and Native client. You want to do both, so we select ***Yes*** on both options which let a bunch of options appear. +Here you're going to give the Azure ADB2C application a name and specify whether it should contain a Web API and Native client. You want to do both, so we select ***Yes*** on both options, which causes a bunch of options to appear. ![Add new AD Application](Assets/AddNewAdApp.png) @@ -148,7 +148,7 @@ Click the ***API Access*** menu item and add a new API for our application and s ## 4.
Connect the Web Api Backend with Azure Active Directory -Not that the Active Directory is set up, we can connect it to the Backend and introduce it as the Identity Provider of choice. As ASP.Net Core has support for authentication built-in, not much code is needed, to add Active Directory Authentication application-wide. +Now that the Active Directory is set up, we can connect it to the Backend and introduce it as the Identity Provider of choice. As ASP.Net Core has support for authentication built-in, not much code is needed to add Active Directory Authentication application-wide. > **Hint:** Remember, although we use existing libraries in our Backend and Frontend projects, Azure Active Directory B2C is based on open standards such as OpenID Connect and OAuth 2.0 and can be integrated into any framework out there. @@ -195,7 +195,7 @@ As you can see, we use `Configuration` variables one more time to not hard code [View in project](/Backend/Monolithic/appsettings.json#L30-L34) -So let's set these variables to the correct values an head back to our App Service, open the ***Application Settings*** and add these variables here as we did before for CosmosDB and Storage. +So let's set these variables to the correct values. Head back to our App Service, open the ***Application Settings*** and add these variables here as we did before for CosmosDB and Storage: - **`ActiveDirectory:Tenant`:** "{OUR_AD}.onmicrosoft.com" - **`ActiveDirectory:ApplicationId`:** *{ID_OF_THE_REGISTERED_APPLICATION}* @@ -205,7 +205,7 @@ So let's set these variables to the correct values an head back to our App Servi Don't forget to hit ***Save*** after you have entered all the variables. -Some of the API calls to our backend requires, that a user is authenticated to proceed. `DELETE` operations are a good example for that. The code in the [`BaseController.cs`](/Backend/Monolithic/Controllers/BaseController.cs) has an `[Authenticate]` attribute added to the Delete function. 
This will automatically refuse calls from unauthenticated clients. In a real-word scenario, you would also want to check if the User's ID matches the owner ID of the item that gets deleted to make sure the client has the right permissions. +Some of the API calls to our backend require that a user is authenticated to proceed. `DELETE` operations are a good example of that. The code in the [`BaseController.cs`](/Backend/Monolithic/Controllers/BaseController.cs) has an `[Authorize]` attribute added to the Delete function. This will automatically refuse calls from unauthenticated clients. In a real-world scenario, you would also want to check that the User's ID matches the owner ID of the item that gets deleted to make sure the client has the right permissions. ```csharp [Authorize]
It uses the [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client/) NuGet package (or MSAL) to take care of communicating to Azure ADB2C (and caching the tokens in response) for us. This removes a lot of work on our end. The `AuthenticationService` gets configured with a set of variables in the [`Constants.cs`](/Mobile/ContosoFieldService.Core/Helpers/Constants.cs) file. As you can see, we define the recently created Scope "read_only" here. @@ -344,9 +344,9 @@ A successful login flow would look like this: ### 6.2 Refresh Access Tokens -Access Tokens usually have a short time to live to provide additional security and let potential attackers that stole and Access Token only operate for a small time. +Access Tokens usually have a short time to live to provide additional security, so a potential attacker who stole an Access Token can only operate for a short time. -To avoid that the user has to login and acquire a new token every 30 minutes, the Access Token can be refreshed silently in the background. Usually, a Rrefresh Token is used for this. The Mobile App uses the ADAL library, which already provides a functionality to refresh the Access Token. Check out the [`AuthenticationService.cs`](/Mobile/ContosoFieldService.Core/Services/AuthenticationService.cs) for implementation details. +To avoid the user having to log in and acquire a new token every 30 minutes, the Access Token can be refreshed silently in the background. Usually, a Refresh Token is used for this. The Mobile App uses the MSAL library, which already provides functionality to refresh the Access Token. Check out the [`AuthenticationService.cs`](/Mobile/ContosoFieldService.Core/Services/AuthenticationService.cs) for implementation details. The App tries to refresh the Access Token automatically when it receives a `401 Unauthorized` response and only shows the Login UI to the user if the background refresh failed.
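The "refresh silently, fall back to the Login UI" flow described above can be sketched with the MSAL API. This is a hedged sketch using the public `Microsoft.Identity.Client` surface, not the exact code from `AuthenticationService.cs` — the helper class and method name are illustrative:

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Identity.Client; // MSAL NuGet package

public static class TokenHelper
{
    // Illustrative "silent first, interactive only on failure" token flow.
    public static async Task<AuthenticationResult> GetTokenAsync(
        IPublicClientApplication app, string[] scopes)
    {
        var accounts = await app.GetAccountsAsync();
        try
        {
            // Uses the cached Refresh Token to get a new Access Token
            // without showing any UI to the user.
            return await app.AcquireTokenSilent(scopes, accounts.FirstOrDefault())
                            .ExecuteAsync();
        }
        catch (MsalUiRequiredException)
        {
            // Silent refresh failed (e.g. the Refresh Token expired),
            // so fall back to showing the Login UI.
            return await app.AcquireTokenInteractive(scopes).ExecuteAsync();
        }
    }
}
```

Wrapping the silent call in a `try/catch` for `MsalUiRequiredException` is the pattern MSAL itself recommends, and it matches the app's behaviour of only showing the Login UI when the background refresh fails.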
@@ -354,7 +354,7 @@ Check out the [Mobile Network Services](/Walkthrough%20Guide/09%20Mobile%20Netwo # Additional Resources -There are several cool things you can do with Azure Active Directory, that will not be part of this workshop. If you want to go further, check out these links. +There are several cool things you can do with Azure Active Directory that will not be part of this workshop. If you want to go further, check out these links: - [Add Social Authentication Providers](https://docs.microsoft.com/en-us/azure/active-directory-b2c/active-directory-b2c-setup-fb-app) - [Customize the Login UI](https://docs.microsoft.com/en-us/azure/active-directory-b2c/active-directory-b2c-reference-ui-customization) diff --git a/Walkthrough Guide/12 Anayltics/README.md b/Walkthrough Guide/12 Anayltics/README.md index 141291cd..7dea9de4 100644 --- a/Walkthrough Guide/12 Anayltics/README.md +++ b/Walkthrough Guide/12 Anayltics/README.md @@ -1,14 +1,16 @@ ![Banner](Assets/Banner.png) # App Center -[App Center](https://www.visualstudio.com/app-center/) offers a rich suit of services aimed at mobile devlopers. We're going to use it today to add crash reporting, analytics and push notifications. We also have a CI/CD workshop that'll be running which covers the build and testing elements of mobile development. + +[App Center](https://www.visualstudio.com/app-center/) offers a rich suite of services aimed at mobile developers. We're going to use it today to add crash reporting, analytics and push notifications. We also have a CI/CD workshop running, which covers the build and testing elements of mobile development. + ## Crash Reporting App Center Crash Reporting lets us know when our app crashes on any device. ![Crash Reporting Overview](Assets/AppCenterCrashOverview.png) -Crashes are grouped together by similarities like the reason for the crash and where the occur in the app.
It is possible to inspect each individual crash report for the last 3 months, after that a stub of 25 crashes will be kept. +Crashes are grouped together by similarities like the reason for the crash and where they occur in the app. It is possible to inspect each individual crash report for the last 3 months; after that, a stub of 25 crashes is kept. ![Crash Report](Assets/AppCenterCrashReport.png) ## Analytics @@ -20,16 +22,16 @@ App Center Analytics will help you understand more about your app users and thei ## Push Use App Center to easily send targeted and personalised push notifications to any mobile platform from any cloud or on-premises backend. -Push notifications is vital for consumer apps and a key compontent in increasing app engagement and usage. For enterprise apps, it can also be used to help communicate up-to-date business information. It is the best app-to-user communication because it is energy-efficient for mobile devices, flexible for the notifications senders, and available while corresponding apps are not active. +Push notifications are vital for consumer apps and a key component in increasing app engagement and usage. For enterprise apps, they can also be used to help communicate up-to-date business information. They are the best channel for app-to-user communication because they are energy-efficient for mobile devices, flexible for the notification senders, and available while the corresponding apps are not active. ### How Push Notifications Work -Push notifications are delivered through platform-specific infrastructures called Platform Notification Systems (PNSes). They offer barebone push functionalities to delivery message to a device with a provided handle, and have no common interface. To send a notification to all customers across iOS and Android, we have to work with APNS (Apple Push Notification Service) and FCM (Firebase Cloud Messaging).
+Push notifications are delivered through platform-specific infrastructures called Platform Notification Systems (PNSes). They offer barebones push functionality to deliver a message to a device with a provided handle, and have no common interface. To send a notification to all customers across iOS and Android, we have to work with APNS (Apple Push Notification Service) and FCM (Firebase Cloud Messaging). At a high level, here is how push works: -1. The client app decides it wants to receive pushes hence contacts the corresponding PNS to retrieve its unique and temporary push handle. The handle type depends on the system (e.g. WNS has URIs while APNS has tokens). -2. The client app stores this handle in the app back-end or provider. -3. To send a push notification, the app back-end contacts the PNS using the handle to target a specific client app. +1. The client app decides it wants to receive pushes, so it contacts the corresponding PNS to retrieve its unique and temporary push handle. The handle type depends on the system (e.g. WNS has URIs while APNS has tokens). +2. The client app stores this handle in the app backend or provider. +3. To send a push notification, the app backend contacts the PNS using the handle to target a specific client app. 4. The PNS forwards the notification to the device specified by the handle. Thankfully for us, the App Center SDKs handle most of this for us. In our app, all we have to do is ensure we start the App Center SDK with Push enabled. It'll handle the rest for us.
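Starting the SDK with Push enabled is a one-liner at app startup. A sketch assuming the Xamarin App Center NuGet packages are installed — the app secrets below are placeholders you would replace with the ones from the App Center portal:

```csharp
using Microsoft.AppCenter;
using Microsoft.AppCenter.Analytics;
using Microsoft.AppCenter.Crashes;
using Microsoft.AppCenter.Push;

// Typically called once from App.xaml.cs in OnStart().
// Each typeof(...) argument enables that App Center service;
// listing Push here is what turns push notifications on.
AppCenter.Start(
    "ios={Your-iOS-App-Secret};android={Your-Android-App-Secret}",
    typeof(Analytics), typeof(Crashes), typeof(Push));
```

With that single call, the SDK registers the device with the right PNS, stores the push handle for us, and surfaces incoming notifications to the app.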