AZ-104 Microsoft Azure Administrator


Manage Azure identities and governance (20–25%)


Manage Microsoft Entra users and groups


Create users and groups

  • Creating users in Microsoft Entra involves adding individual accounts for people who need access to resources, such as employees, contractors, or service accounts.
  • Groups allow you to organize users based on criteria like department, job role, location, or project. Managing users through groups makes it easier to assign permissions and policies at scale.
  • There are default groups, such as ‘All Users’, but you can also create custom groups tailored to your organization’s needs. Groups can be managed manually or through automated rules that add users based on specific attributes.
  • Admins can add users to groups both manually (selecting users one by one or uploading in bulk) and automatically (using rules that apply when users meet certain criteria, such as joining a department or region).
  • Assigning roles and permissions to groups streamlines user management and security. Changes made to group permissions apply instantly to all group members, reducing repetitive tasks.

Example: Imagine an IT admin at a company uses Microsoft Entra to set up a group called ‘Marketing Team’ for all employees in marketing. When a new marketing employee joins, the admin can quickly add them to this group, giving them access to relevant files, apps, and permissions automatically.

Use Case: A growing startup onboards new staff through the Microsoft Entra admin center. As new hires join the Sales department, the Azure admin creates a ‘Sales’ group and adds these users. The admin assigns custom permissions to the group, giving members secure access to sales tools, data, and dashboards, ensuring consistent permissions and easy management as the team expands.
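
A minimal Azure CLI sketch of that workflow (the display names, UPN, and password below are placeholders, not values from this guide):

  # Create a user account (placeholder UPN and temporary password)
  az ad user create --display-name "Jane Doe" \
    --user-principal-name jane.doe@contoso.com \
    --password "<temporary-password>"

  # Create the group and add the new user to it
  az ad group create --display-name "Sales" --mail-nickname "Sales"
  az ad group member add --group "Sales" \
    --member-id $(az ad user show --id jane.doe@contoso.com --query id -o tsv)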

For more information see these links:


Manage user and group properties

  • User and group properties allow administrators to define who has access to resources in Microsoft Entra ID (formerly Azure Active Directory). Properties include display name, email, membership, and assigned roles or permissions.
  • Groups simplify management by letting you assign roles and permissions to many users at once, rather than individually. You can create groups, add or remove users, and set permissions for what group members can do.
  • Editing properties, such as changing a user’s role or updating group permissions, helps ensure the right people always have the correct level of access. Common actions include updating email addresses, changing group membership, or adjusting admin rights.
  • The User Management dashboard provides tools such as search, filters, and bulk actions, making it easy to find users or groups, assign roles, and track who made changes and when.
  • Managing permissions through groups allows organizations to follow best practices for security and compliance, reducing errors and protecting sensitive information.

Example: An IT administrator uses the Entra User Management dashboard to create a ‘Finance Team’ group. The admin adds all finance staff to this group and assigns the ‘Approver’ permission, so they can approve financial transactions in connected apps. If a new finance employee joins, the admin simply adds them to the group, automatically giving them the right access.

Use Case: A company needs to control who can access confidential HR documents in Azure. The administrator creates an ‘HR Managers’ group, adds the HR managers as members, and sets custom permissions for accessing the HR document library. When a manager leaves or changes roles, the admin updates group membership, ensuring only current HR managers have access.
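
For illustration, common property and membership changes can also be scripted with Azure CLI (the contoso.com names and group name are placeholders):

  # Inspect a user's current properties
  az ad user show --id hr.manager@contoso.com

  # Update a property, for example the display name
  az ad user update --id hr.manager@contoso.com --display-name "HR Manager (EMEA)"

  # Remove someone from a group when they change roles
  az ad group member remove --group "HR Managers" \
    --member-id $(az ad user show --id former.manager@contoso.com --query id -o tsv)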

For more information see these links:


Manage licenses in Microsoft Entra ID

  • Licenses in Microsoft Entra ID control access to applications and services, such as Microsoft 365 or Office apps. Rather than assigning licenses to each user individually, you can assign licenses to groups, streamlining user management.
  • Using group-based licensing, all members of a selected group automatically inherit the assigned licenses. When a user joins the group, they get the license; when they leave, the license is removed. This helps ensure users always have access to the correct services.
  • Dynamic membership groups allow licenses to be assigned based on user attributes like department or job title. This means licenses can be added or removed automatically as user roles or locations change, making management much easier for IT teams.
  • Administrators should always set the usage location for users before assigning licenses. This ensures that users receive only the services available in their country or region, avoiding licensing errors.
  • A user can inherit licenses from several groups, but inheriting the same product license from multiple groups consumes only one license for that product. License settings must be managed at the group level, so changes apply to every member of that group.

Example: Imagine a company creates an ‘Office Staff’ group in Microsoft Entra ID and assigns Office 365 licenses to this group. When new employees join the company, they’re added to the ‘Office Staff’ group and automatically get access to Word, Excel, and Outlook without manual license assignment. If an employee switches departments and leaves the group, their Office license is removed instantly.

Use Case: An IT administrator at a bank sets up group-based licensing so all tellers automatically receive access to the bank’s email and secure document tools when they are added to the ‘Teller’ group. Every year, the admin reviews group membership to make sure only current tellers retain their licenses, and new tellers can request access, pending manager approval. This ensures compliance and reduces manual work.
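
Group-based licensing itself is configured in the admin center or through Microsoft Graph; as a rough sketch, the required usage location and a group license assignment could be set with `az rest` calls to Graph (the UPN, group object ID, and SKU GUID are placeholders):

  # Set the usage location before assigning a license (required)
  az rest --method patch \
    --url https://graph.microsoft.com/v1.0/users/jane.doe@contoso.com \
    --headers "Content-Type=application/json" \
    --body '{"usageLocation":"US"}'

  # Assign a license to a group so members inherit it
  az rest --method post \
    --url https://graph.microsoft.com/v1.0/groups/<group-object-id>/assignLicense \
    --headers "Content-Type=application/json" \
    --body '{"addLicenses":[{"skuId":"<sku-guid>","disabledPlans":[]}],"removeLicenses":[]}'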

For more information see these links:


Manage external users

  • External users are individuals outside your organization (such as partners, vendors, or consultants) who need access to specific resources, but aren’t part of your internal staff. In Microsoft Entra, these are typically managed as ‘guest users.’
  • Microsoft Entra B2B collaboration makes it easy and secure to invite external users. You send invitations to their email, allowing them to access your resources using their own credentials from their home organization or a social identity.
  • Access for external users can be restricted using roles, policies, and settings. For example, guest users can be limited from seeing certain directory information or blocked from sensitive applications by configuring external collaboration settings and using Conditional Access policies.
  • Proper management of external users helps organizations collaborate with outsiders while minimizing risk. You can govern who has access, review and revoke access periodically, and ensure that only authorized external users see specific data.
  • To onboard external users, you can allow self-service sign-up, publish access packages, or manage invitations in the Azure portal. It’s important to regularly review guest accounts to make sure access remains appropriate and secure.

Example: A company working on an Azure-based project hires freelance developers. Instead of creating full company accounts for them, the IT admin invites them as guest users through Microsoft Entra B2B, granting access only to the required project resources and restricting their permissions compared to regular employees.

Use Case: An IT administrator in a small business uses Microsoft Entra B2B to invite an external marketing agency. The admin assigns guest access so the agency can review analytics in a specific Azure dashboard. The admin configures policies to ensure the agency can’t view internal employee data or access other sensitive resources.
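
As a sketch, a guest invitation can also be sent through Microsoft Graph with `az rest` (the email address and redirect URL are placeholders):

  # Invite an external user as a guest (B2B collaboration)
  az rest --method post \
    --url https://graph.microsoft.com/v1.0/invitations \
    --headers "Content-Type=application/json" \
    --body '{"invitedUserEmailAddress":"designer@agency.example","inviteRedirectUrl":"https://myapps.microsoft.com","sendInvitationMessage":true}'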

For more information see these links:


Configure self-service password reset (SSPR)

  • Self-service password reset (SSPR) allows users in Microsoft Entra ID (formerly Azure AD) to reset their own passwords without needing IT support. This reduces help desk calls and empowers users to regain access quickly.
  • To configure SSPR, an administrator selects which users are allowed to use SSPR, defines authentication methods (such as email, SMS, or security questions), and ensures those users register their security information. This is configured in the Microsoft Entra admin center under Protection > Password reset.
  • Monitoring SSPR usage is possible through built-in reports, letting administrators check how many users are enabled and have registered for SSPR. This helps maintain security and ensures successful adoption.

Example: A school’s IT department enables SSPR for all students and teachers. When a student forgets their password, they simply go to the password reset portal, verify their identity with a code sent to their mobile phone, and create a new password—without contacting the IT help desk.

Use Case: At a small business new to Azure, the IT admin enables SSPR for all employees to reduce password reset requests. Employees register their phone numbers for verification. When someone forgets their password, they can reset it themselves using the SSPR portal, saving time for both the user and IT staff.

For more information see these links:


Manage access to Azure resources


Manage built-in Azure roles

  • Azure provides a set of built-in roles through Azure Role-Based Access Control (RBAC) to manage access to resources. These roles define the permissions that users, groups, and applications have on specific Azure resources.
  • The most commonly used built-in roles are Owner, Contributor, and Reader. The Owner role gives full access, including the ability to assign roles to other users. Contributor allows managing resources but not assigning roles, and Reader can only view resources without making changes.
  • You can assign these built-in roles at different scopes—such as management group, subscription, resource group, or resource level—using the Azure portal’s Access control (IAM) page. This lets you precisely control who can do what within your Azure environment.
  • If none of the built-in roles fit your organization’s needs, Azure also allows you to create custom roles. However, beginners often start by using built-in roles to keep access management easy and secure.
  • For increased security and to follow the principle of least privilege, you should only assign users the minimum permissions they need by choosing the most restrictive built-in role that supports their tasks.

Example: Suppose your company is starting to use Azure for hosting virtual machines and storage accounts. You want your IT manager to manage all resources, assign roles, and control access, so you assign them the Owner role at the subscription level. You assign another team member the Contributor role so they can manage VMs and storage but can’t change access permissions. Finally, your finance team gets the Reader role so they can view resource usage for reporting purposes but can’t make changes.

Use Case: As a new Azure administrator for a small IT department, you need to set up access for your team. By assigning the Reader role to your finance staff, Contributor to your support engineers, and Owner to your lead IT manager, you ensure everyone has the right level of access for their job—no more and no less—using Azure’s built-in roles.
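
A minimal sketch of those assignments with Azure CLI (the subscription ID, UPNs, and group name are placeholders):

  # Owner for the lead IT manager at subscription scope
  az role assignment create --assignee itmanager@contoso.com \
    --role "Owner" --scope /subscriptions/<subscription-id>

  # Contributor for support engineers, Reader for finance, same scope
  az role assignment create --assignee engineer@contoso.com \
    --role "Contributor" --scope /subscriptions/<subscription-id>
  az role assignment create --assignee finance-team@contoso.com \
    --role "Reader" --scope /subscriptions/<subscription-id>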

For more information see these links:


Assign roles at different scopes

  • Azure lets you assign roles at different scopes, which means you control exactly which resources someone can access. The four main scopes are: management group (broadest), subscription, resource group, and resource (most specific).
  • Roles assigned at a higher (broader) scope automatically grant permissions to all levels underneath. For example, assigning a role at the subscription level also applies that role to all resource groups and resources within that subscription.
  • Assigning roles at a narrower scope (like a specific resource group or resource) limits what users can access, which helps follow the security best practice of ‘least privilege’—only give users what they need to do their job, and no more.
  • To assign a role, first decide what scope is appropriate: do you want someone to manage all Azure resources (broad), or just a particular virtual machine or storage account (narrow)? Then assign the role at that level in the Azure portal.
  • It’s important to regularly review role assignments and scopes to ensure access is still appropriate, especially when team members change roles or projects.

Example: Suppose a company has several teams working in Azure. The IT manager wants ‘Team A’ members to only manage resources relevant to their project. Instead of giving Team A access to the entire subscription, the manager assigns the ‘Contributor’ role just to the specific resource group housing Team A’s resources. This way, Team A can manage only their own resources without risking changes to unrelated projects.

Use Case: A new Azure administrator in an IT department needs to let a junior developer manage virtual machines used for testing, but not access databases or networking settings. The administrator assigns the ‘Virtual Machine Contributor’ role to the developer at the resource group containing only the test VMs. This limits the developer’s access to only those VMs, minimizing risk.
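
The command is the same; only the --scope argument changes. A sketch with placeholder IDs and names:

  # Broad: Contributor across a whole subscription
  az role assignment create --assignee teamlead@contoso.com --role "Contributor" \
    --scope /subscriptions/<subscription-id>

  # Narrow: Virtual Machine Contributor on a single resource group of test VMs
  az role assignment create --assignee junior.dev@contoso.com \
    --role "Virtual Machine Contributor" \
    --scope /subscriptions/<subscription-id>/resourceGroups/test-vms-rg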

For more information see these links:


Interpret access assignments

  • A role assignment in Azure connects a security principal (like a user or group) to a specific role (such as Reader, Contributor, or Owner) at a defined scope (resource, resource group, subscription, or management group). This gives the principal certain permissions to manage or view Azure resources.
  • The scope of a role assignment determines where the access applies. You can assign roles at different levels—from a single resource (like a virtual machine) to a whole subscription. Assigning the minimal necessary scope helps ensure better security.
  • Role assignments can overlap, and Azure RBAC uses an additive model. This means if a user has multiple roles at the same or different scopes, their effective permissions are the sum of all assigned permissions.
  • Role assignments can be made directly to users or to groups. If a user is part of a group with a role assignment, the user will inherit those permissions. This makes it easier to manage access by grouping users who need similar permissions.
  • To interpret access assignments, check three things: who is assigned (principal), what role is given (set of allowed actions), and where (scope). Reviewing these helps you understand and control who can do what in your environment.

Example: Imagine an IT manager assigns the ‘Reader’ role to all members of the ‘SupportTeam’ group at the resource group level. Now, everyone in SupportTeam can view all resources in that resource group, but they cannot make changes.

Use Case: A new Azure user in a small company is given the ‘Contributor’ role on the company’s development resource group. This lets the user create and manage virtual machines and databases in the development environment without affecting production resources. The manager periodically reviews these assignments to ensure only the right people have the necessary permissions.
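
To review who is assigned what and where, the assignments can be listed with Azure CLI (the principal and resource group names are placeholders):

  # All role assignments for one principal, across all scopes
  az role assignment list --assignee support.user@contoso.com --all --output table

  # Assignments that apply at a specific resource group, including inherited ones
  az role assignment list --resource-group dev-rg --include-inherited --output table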

For more information see these links:


Manage Azure subscriptions and governance


Implement and manage Azure Policy

  • Azure Policy is a tool that helps organizations enforce specific rules and compliance requirements automatically across their Azure resources. By using policies, you can make sure resources meet your organization’s standards, such as security, cost control, and naming conventions.
  • To implement an Azure Policy, you start by selecting or creating a policy definition. A policy definition is simply a rule—such as only allowing resources in specific regions, or enforcing certain tags. Azure includes many built-in policy definitions you can use immediately.
  • Once you’ve chosen a policy definition, you assign it to a scope: a management group, subscription, or resource group. The assignment enforces the policy automatically for all resources in that scope. You can also exclude certain resources if needed.
  • After a policy is assigned, Azure Policy continuously evaluates resources within the scope to check compliance. Non-compliant resources are flagged for review, and depending on the policy, actions can be taken automatically (like denying a deployment or adding missing tags).
  • By regularly reviewing compliance results in the Azure portal, you can easily see which resources meet the defined policies and take action to correct any issues. This helps maintain governance and control as your Azure environment grows.

Example: A company wants every resource deployed in Azure to have a ‘Department’ tag so they can track costs and ownership. They use the built-in policy ‘Inherit a tag from the resource group if missing’ and assign it to all resource groups. Whenever a new resource is created without the tag, Azure Policy automatically adds it using the value from its parent resource group.

Use Case: An IT team new to Azure sets up policies to prevent accidental resource deployments in costly regions, enforce tagging for better cost tracking, and restrict which virtual machine types can be created. By leveraging built-in Azure Policies, they ensure their cloud environment is well-governed and stays within budget without manual oversight.
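
A sketch of assigning the built-in tag-inheritance policy with Azure CLI (the scope and assignment name are placeholders; built-in definition names are GUIDs, so the definition is looked up first, and a Modify policy needs a managed identity for remediation):

  # Find the built-in definition by display name
  az policy definition list \
    --query "[?displayName=='Inherit a tag from the resource group if missing'].name" -o tsv

  # Assign it at subscription scope, passing the required tag name parameter
  az policy assignment create --name inherit-department-tag \
    --policy <definition-name-from-previous-step> \
    --scope /subscriptions/<subscription-id> \
    --params '{"tagName": {"value": "Department"}}' \
    --mi-system-assigned --location eastus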

For more information see these links:


Configure resource locks

  • Resource locks in Azure are used to prevent accidental changes or deletions of your resources. You can apply locks at the subscription, resource group, or individual resource level.
  • There are two types of resource locks: ‘CanNotDelete’, which prevents resources from being deleted but still allows changes, and ‘ReadOnly’, which prevents any changes, including updates and deletion.
  • You can configure locks using the Azure portal, Azure CLI, PowerShell, ARM templates, or REST API. In the portal, navigate to the resource (or resource group), select ‘Locks’, and add a lock by specifying its type and a descriptive name.
  • Locks applied at a higher level (like a resource group) are inherited by all resources within it. This means if you lock a resource group, every resource inside it is also protected.
  • To remove or change a lock, you must have sufficient permissions (e.g., Owner or User Access Administrator roles). Deleting a lock immediately removes its protections.

Example: Suppose you have a web application running on Azure and want to make sure nobody accidentally deletes the app service. You navigate to the app service in the Azure portal, select ‘Locks’, and add a ‘CanNotDelete’ lock named ‘ProtectApp’. Now, even if someone tries to delete the app, they’ll be prevented by the lock.

Use Case: An IT admin at a company new to Azure sets up multiple virtual machines and storage accounts for a production workload. To ensure these critical resources are not accidentally deleted or changed, the admin applies ‘CanNotDelete’ locks to all production resource groups. This provides a safety net while the team is learning how Azure resources work.
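
A sketch of the same protection with Azure CLI (the resource group and lock names are placeholders):

  # Prevent deletion of everything in a production resource group
  az lock create --name ProtectProd --lock-type CanNotDelete \
    --resource-group production-rg

  # Review or remove the lock later (requires Owner or User Access Administrator)
  az lock list --resource-group production-rg --output table
  az lock delete --name ProtectProd --resource-group production-rg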

For more information see these links:


Apply and manage tags on resources

  • Tags are key-value pairs (like ‘Environment = Production’) that you attach to Azure resources, resource groups, or subscriptions. They help you organize, search, and manage resources based on attributes like department, project, or environment.
  • You can apply, edit, or delete tags easily using the Azure portal, Azure PowerShell, or Azure CLI. In the portal, select a resource, choose ‘Tags’, and add name-value pairs. For bulk actions, select multiple resources and assign tags to all at once.
  • Tags are helpful for tracking costs, managing permissions, and reporting. For example, tagging resources with ‘CostCenter’ or ‘Project’ enables you to see how much each team or project spends.
  • To add tags, you need specific access, like the ‘Tag Contributor’ role. Tags are stored in plain text, so avoid putting sensitive data into them.
  • You can update or merge tags without losing existing ones. Tools like PowerShell and CLI support these updates with commands like ‘Update-AzTag’ and ‘az tag update --operation Merge’.

Example: Suppose you’re part of an IT team managing several virtual machines (VMs) in Azure for development, testing, and production. To easily find and manage these VMs, you apply a tag ‘Environment’ with values like ‘Dev’, ‘Test’, or ‘Production’. Later, you can filter all VMs by their ‘Environment’ tag to see only the ones used for production.

Use Case: An organization wants to monitor and control cloud spending for different departments. By applying a ‘CostCenter’ tag to each resource (e.g., ‘CostCenter = HR’ or ‘CostCenter = IT’), the finance team can generate reports that break down Azure costs by department. This makes budgeting and cost optimization much easier, especially for those new to Azure.
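
A sketch of those tag operations with Azure CLI (the resource ID is a placeholder):

  # Replace all tags on a resource
  az tag create --resource-id <resource-id> --tags Environment=Production CostCenter=IT

  # Merge in an additional tag without losing existing ones
  az tag update --resource-id <resource-id> --operation Merge --tags Project=Migration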

For more information see these links:


Manage resource groups

    1. Resource groups are containers in Azure that organize and manage related resources, such as virtual machines, databases, and networks, for your solutions. Grouping resources makes it easier to manage them as a collective unit.
    2. You can create, list, update, and delete resource groups using different tools like the Azure portal, Azure PowerShell, Azure CLI, or SDKs for languages such as Python. These actions help organize resources and streamline their management.
    3. Resource groups help apply consistent policies, manage access, monitor usage, and perform lifecycle actions (like deployment and deletion) on all included resources at once.
    4. When creating a resource group, you specify a location for its metadata; this is important for compliance or performance needs. However, actual resources in the group can be spread across different regions.
    5. For beginners, starting with the Azure portal is the simplest way to manage resource groups: you can visually create, view, and delete resource groups without needing command-line knowledge.

Example: Imagine an IT company launching a simple web application. They put the web server, database, and network settings all into one resource group. If the app needs to be updated or removed, they manage all resources together in one step.

Use Case: A new Azure user is tasked with creating a test environment for learning purposes. They use the Azure portal to create a resource group called ‘TestRG’ in ‘Central US’, then deploy a virtual machine and a storage account within the group. This setup lets them manage, monitor, and eventually clean up all test resources easily by deleting the resource group when finished.
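
The same lifecycle can be scripted with Azure CLI, using the names from the use case above:

  # Create the resource group, list what exists, and clean up when finished
  az group create --name TestRG --location centralus
  az group list --output table
  az group delete --name TestRG --yes --no-wait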

For more information see these links:


Manage subscriptions

  • Understanding subscriptions: An Azure subscription is an agreement to use Azure services, where resources like virtual machines and databases are billed. Each organization can have one or multiple subscriptions to help manage resources and costs.
  • Managing subscriptions: You can view, modify, or cancel subscriptions using the Azure portal or Microsoft 365 admin center. Common actions include adjusting license counts, changing payment options, and reviewing bills. Proper permissions, such as Owner or Contributor roles, are required for these changes.
  • Organizing with management groups: You can group multiple Azure subscriptions using Management Groups to apply consistent policies, control access, and monitor compliance across all subscriptions. This simplifies administration for organizations with several subscriptions.
  • Handling SaaS subscriptions: Subscriptions for SaaS applications (like Microsoft 365) are usually managed via per-user licensing. You assign or remove licenses to/from users, allowing precise control over who accesses certain services.
  • Deleting subscriptions: Unused or trial subscriptions can be deleted from the admin center. For self-service sign-up subscriptions, deletion blocks user access and data, and may affect directory settings, so careful management is needed.

Example: A small IT company signs up for Microsoft 365, purchasing a subscription for 20 user licenses. The administrator uses the Microsoft 365 admin center to assign licenses to each employee, adjust license numbers as staff grows, and views monthly billing statements.

Use Case: An IT administrator for a startup uses the Azure portal to monitor their resource usage and cost. As the company grows, they add new subscriptions for development and production environments, then use Management Groups to enforce security policies across all subscriptions for better governance.
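
A short sketch of day-to-day subscription handling with Azure CLI (the subscription name is a placeholder):

  # See which subscriptions the signed-in account can use, then switch context
  az account list --output table
  az account set --subscription "Production"
  az account show --query "{name:name, id:id}" -o table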

For more information see these links:


Manage costs by using alerts, budgets, and Azure Advisor recommendations

  • Set up cost alerts to monitor spending and usage in real time. Azure Cost Management lets you create alerts for budgets, anomalies, and scheduled triggers, helping you catch unexpected spikes and avoid surprises.
  • Create and manage budgets at different scopes (subscription, resource group, or department) to track costs against spending plans. When actual or forecasted spending approaches or exceeds your set limits, Azure sends notifications to relevant stakeholders, allowing early intervention.
  • Use Azure Advisor for personalized cost-saving recommendations. Advisor analyzes your resources and usage, then suggests actions like rightsizing virtual machines, purchasing savings plans, or removing unused resources to optimize and reduce overall expenses.
  • Leverage automated anomaly detection to highlight abnormal or unexpected patterns in your cost data. This enables you to quickly identify and resolve issues, such as sudden overuse of a service or accidental resource deployment, before costs accumulate.
  • Regularly review cost data and take recommended actions, such as resizing resources, implementing policies, or reviewing forecasts, to maintain control over subscription expenses and ensure cost efficiency.

Example: Imagine you set a monthly budget of $1000 for your company’s Azure resources. Midway through the month, Azure Cost Management detects that your spending is trending higher than usual and sends an alert. You check the details and see that a virtual machine was accidentally left running. You shut it down, preventing extra costs from building up.

Use Case: An IT department new to Azure creates cost alerts and budget limits for their development environment. When Azure notifies them of higher than expected spending in a resource group, the team uses Azure Advisor recommendations to identify and remove several idle virtual machines. This saves the organization money and builds good habits for ongoing cloud governance.
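
A sketch of reviewing budgets and Advisor cost recommendations from the CLI (budgets and alert rules themselves are typically created in the portal or via templates):

  # Existing budgets for the current subscription
  az consumption budget list --output table

  # Advisor recommendations filtered to cost savings
  az advisor recommendation list --category Cost --output table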

For more information see these links:


Configure management groups

  • Management groups in Azure act as containers to help organize and manage multiple subscriptions under a single hierarchy. This makes it easier to apply policies, monitor compliance, and control access at scale.
  • You can create a management group hierarchy to reflect your organization’s structure, such as separating groups by departments, environments (production, test), or compliance needs. All subscriptions within a management group inherit policies and access controls set at the group level.
  • Using management groups, you can assign Azure policies and role-based access controls (RBAC) once, and those settings will be automatically inherited by all child subscriptions and resources. This simplifies governance and reduces repetitive configuration.
  • Management groups are particularly useful as your Azure environment grows. They allow you to group related subscriptions, enforce organization-wide standards, and meet regulatory or operational requirements, such as isolating sensitive workloads.
  • Setting up and configuring management groups can be done using the Azure Portal, Azure CLI, or SDKs (like JavaScript). You can customize display names, parent-child relationships, and manage group membership for flexibility.

Example: A company has three Azure subscriptions: one for HR, one for Finance, and one for IT. By creating management groups for each department, the company can apply department-specific policies (like restricting access to certain regions) and assign roles (like Owner or Contributor) at the management group level, rather than configuring each subscription separately.

Use Case: A small business new to Azure wants to ensure its development and production environments are governed separately for security. They create two management groups: ‘Development’ and ‘Production’, then place relevant subscriptions under each. The business can apply strict security policies to the Production group (for example, restricting resource creation to certain geographic regions), while giving developers more flexibility in the Development group.
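
A sketch of building that hierarchy with Azure CLI (the group names and subscription ID are placeholders):

  # Create the two management groups
  az account management-group create --name Development --display-name "Development"
  az account management-group create --name Production --display-name "Production"

  # Move a subscription under the Production group
  az account management-group subscription add --name Production \
    --subscription <subscription-id>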

For more information see these links:


Implement and manage storage (15–20%)


Configure access to storage


Configure Azure Storage firewalls and virtual networks

  • Azure Storage firewalls and virtual networks let you control which networks and IP addresses can access your storage account. By default, storage is open to all networks, but you can restrict access for better security.
  • You can create network rules to allow only traffic from certain Azure Virtual Network subnets, specific public IP ranges, or trusted Azure resource instances. This means only approved sources can reach your storage account.
  • Setting up these restrictions is done in Azure Portal under the Networking section of your storage account. You choose ‘Selected networks,’ then add allowed virtual networks, subnets, and/or IP ranges, and save your configuration.
  • Trusted service exceptions allow certain Azure services (like Azure Backup or monitoring solutions) to access your storage account, even if other network restrictions are in place. You can enable this feature easily in the portal.
  • For actionable security, always verify and update your firewall rules as your network changes, and remember access requires both correct network rules and storage account authorization permissions.

Example: Imagine you run an IT department and store company files in Azure Blob Storage. You want only your office network (with known IP addresses) and the company’s private Azure Virtual Network to access these files. You configure firewall rules in Azure to allow traffic only from your office’s public IP and your private subnet. Anyone trying to access from elsewhere is blocked automatically.

Use Case: A new Azure admin at an IT company wants to secure customer data stored in their Azure Storage account. By configuring firewalls and virtual network rules, they prevent unauthorized internet access and ensure only corporate and approved partner networks (via allowed subnets and IPs) can read or write data, protecting sensitive information and meeting compliance standards.
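
A sketch of those network rules with Azure CLI (the account, resource group, subnet, and IP range are placeholders):

  # Deny all traffic by default, but keep trusted Azure services working
  az storage account update --name contosostorage01 --resource-group storage-rg \
    --default-action Deny --bypass AzureServices

  # Allow the office public IP range and one virtual network subnet
  az storage account network-rule add --account-name contosostorage01 \
    --resource-group storage-rg --ip-address 203.0.113.0/24
  az storage account network-rule add --account-name contosostorage01 \
    --resource-group storage-rg --vnet-name corp-vnet --subnet apps-subnet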

For more information see these links:


Create and use shared access signature (SAS) tokens

  • Shared Access Signature (SAS) tokens are secure strings you generate to grant controlled access to Azure storage resources (like containers or files) without sharing your storage account keys. They specify permissions and validity period.
  • You can create SAS tokens using Azure Storage Explorer or the Azure portal. When creating a SAS, you define start/expiry times and specific permissions (such as read, write, or list) for the target resource.
  • A SAS token is essentially appended to the URL of a storage resource. Anyone with the SAS URL can access the resource only with the permissions and during the timeframe you specified. Always keep SAS URLs secure and use HTTPS when sharing them.
  • There are two main ways to sign a SAS token: with an account key (no maximum expiry, though best practice is to set an expiration policy) or with a user delegation key (secured by Microsoft Entra ID, valid for a maximum of 7 days). This lets you choose the right balance between security and flexibility.
  • SAS tokens are ideal for scenarios where you want to delegate specific operations (such as allow a colleague to upload or download files) temporarily, rather than granting full access to your storage account.

Example: Suppose you need to send a large file to a client, but don’t want to expose your full Azure storage account. You create a SAS token with ‘read’ permission for that file, set it to expire in 24 hours, and send the SAS URL to the client. The client clicks the link and downloads the file securely, with no ability to change or delete anything.

Use Case: An IT administrator at a company uploads log files to Azure Blob Storage for troubleshooting. To allow a support engineer outside their organization to access just those files, the admin creates a SAS token with ‘read’ and ‘list’ permissions for the logs folder, valid for 48 hours, and sends the SAS URL. The engineer accesses only the relevant files for the limited time, maintaining overall storage security.
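
A sketch of generating a short-lived, read-only user delegation SAS URL for one blob with Azure CLI (the account, container, blob, and expiry are placeholders):

  # Read-only SAS for a single blob, valid until the given UTC expiry
  az storage blob generate-sas --account-name contosostorage01 \
    --container-name logs --name app.log \
    --permissions r --expiry 2025-07-01T00:00Z \
    --auth-mode login --as-user --https-only --full-uri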

For more information see these links:


Configure stored access policies

  • A stored access policy in Azure Storage allows you to control one or more shared access signatures (SAS) at the server level, letting you set restrictions such as permissions, start time, and expiry time.
  • Configuring a stored access policy means you can change or revoke access without having to reissue or update individual SAS tokens, making management much easier and safer.
  • To set up a stored access policy, you define the policy with a unique name and desired permissions (for example: read, write, delete), and an optional validity period. You then apply it to your Azure Storage resource (like a blob container) and associate SAS tokens with it.
  • Azure supports a maximum of five stored access policies per resource. Modifying or deleting a policy immediately affects every SAS token linked to it, ensuring quick enforcement of new rules.
  • Using stored access policies helps enforce security best practices in IT by allowing automatic expiry of access, easier revocation, and uniform permission management across multiple users or systems.

Example: Suppose a company shares files in an Azure blob container with temporary collaborators. Instead of issuing a separate SAS token for each person with differing access rules, the company creates a stored access policy granting read-only permissions valid for one week. All SAS tokens are based on this policy. If needed, the admin can quickly extend the expiry or revoke everyone’s access via a single change.

Use Case: An IT administrator for a new Azure deployment wants to grant a project team temporary read/write access to a container during a system migration. By creating a stored access policy with a set expiration date and relevant permissions, the admin ensures the team’s access will automatically expire after the migration, reducing manual cleanup and improving security.
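
A sketch of creating a stored access policy and issuing a SAS tied to it (the account, container, and dates are placeholders; account key authentication is assumed):

  # Define a read-only policy on the container, valid for one week
  az storage container policy create --account-name contosostorage01 \
    --container-name shared-files --name readonly-week \
    --permissions rl --expiry 2025-07-08T00:00Z

  # SAS tokens reference the policy; changing the policy affects them all at once
  az storage container generate-sas --account-name contosostorage01 \
    --name shared-files --policy-name readonly-week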

For more information see these links:


Manage access keys

  • Access keys are special codes used to authenticate and connect to Azure storage accounts and workspaces. They should be kept secret and shared only with trusted parties.
  • Azure provides two access keys (primary and secondary) for each storage account or workspace. This allows you to change (rotate) keys without disrupting your applications.
  • Regularly rotating (regenerating) access keys helps maintain security by preventing unauthorized access if a key is accidentally exposed. After rotation, you must update any apps or services using the old key to use the new key.
  • You can enable or disable access keys in the Azure portal or via CLI. Disabling access keys will block all requests using keys or connection strings, which can stop unauthorized access if you suspect compromise.
  • Using Azure Key Vault to manage access keys is recommended for added security. Key Vault centralizes key management and helps enforce policies like key expiration and access controls.

Example: Imagine you build a web app that stores user profile images in Azure Blob Storage. To allow your app to upload images, you use an access key in your app’s configuration. If you hire a contractor to help with development, you share the secondary key with them. When the contractor finishes work, you regenerate the secondary key to prevent further access, while your app continues using the primary key without interruption.

Use Case: A small IT company sets up an Azure Quantum workspace for research. As team members change, they share the secondary access key with new researchers. If someone leaves the team or a key is suspected to be leaked, they quickly regenerate the secondary access key and update connection strings for all affected users, keeping the workspace secure while avoiding downtime for their core applications.
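
A sketch of listing and rotating storage account keys with Azure CLI (the account and resource group are placeholders):

  # View the current keys, then regenerate the secondary key after the contractor leaves
  az storage account keys list --account-name contosostorage01 \
    --resource-group storage-rg --output table
  az storage account keys renew --account-name contosostorage01 \
    --resource-group storage-rg --key secondary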

For more information see these links:


Configure identity-based access for Azure Files

  • Identity-based access for Azure Files lets you manage permissions using user or group accounts from Microsoft Entra ID (formerly Azure Active Directory) or from on-premises Active Directory Domain Services (AD DS). This helps ensure that only authorized users can access, modify, or manage files stored in Azure.
  • You can assign permissions at different levels: at the entire file share, or more granular directory and file levels. Share-level permissions are typically managed through Azure RBAC, while file or directory permissions use Windows Access Control Lists (ACLs).
  • To enable identity-based access, you first synchronize your existing user identities to Microsoft Entra ID. You then configure the storage account to use this identity source, assign share-level permissions, and set file/directory permissions as needed through Windows ACLs.
  • Using identity-based access is more secure than traditional storage account keys and helps prevent accidental exposure. You should never share storage account keys, but instead use user or group-based permissions.
  • This approach works for SMB (Server Message Block) protocol access, making it familiar for organizations already using Windows file shares, and is supported for both Windows and Linux clients.

Example: An IT administrator for a small business wants employees to access shared documents from their laptops using their company login. By setting up identity-based access for Azure Files, the admin ensures that employees can access only the folders relevant to their department, while sensitive folders are restricted to managers.

Use Case: A new IT professional in a company migrating file servers from on-premises to Azure can configure Azure Files with identity-based access. This allows employees to use their existing logins to access company documents from anywhere, with share and file permissions managed centrally, just like they were used to on the local network.
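
Share-level permissions are granted with Azure RBAC; a rough sketch of assigning the built-in SMB share role at the scope of a single file share (the subscription, account, and share names are placeholders, and the scope path is written out by hand here):

  # Let a user read and write files in one share over SMB
  az role assignment create --assignee employee@contoso.com \
    --role "Storage File Data SMB Share Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/storage-rg/providers/Microsoft.Storage/storageAccounts/contosofiles01/fileServices/default/fileshares/departmentdocs"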

For more information see these links:


Configure and manage storage accounts


Create and configure storage accounts

    1. Plan and Create a Storage Account: Begin by deciding on crucial details such as your subscription, resource group, account name, and region. These choices determine where your data is stored and how it’s managed. It’s important to choose a naming convention and region that fits your organization’s needs, and remember, storage account names must be globally unique and contain only lowercase letters and numbers.
    2. Choose the Right Performance and Redundancy Options: Select between Standard and Premium performance tiers depending on your workload requirements. For data protection and high availability, select a redundancy option like locally redundant storage (LRS) or geo-redundant storage (GRS). For example, GRS automatically copies your data to a different region for disaster recovery.
    3. Configure Advanced Features and Security: On the Advanced tab, decide if you need features such as hierarchical namespace for Data Lake workloads, SFTP for secure transfers, or NFS for Linux compatibility. Configure security settings like requiring HTTPS for all data transfers and controlling anonymous access to containers for enhanced protection.
    4. Set Data Management Options: Configure the default access tier (Hot or Cool at the account level; Archive is applied per blob) based on how frequently you plan to access the data, and enable features like blob versioning or soft delete to protect your data against accidental changes or deletions.
    5. Adjust Networking and Encryption Settings: Customize connectivity by choosing public or private endpoints, and set routing preference for network efficiency and costs. Select encryption settings to ensure all data is encrypted at rest, meeting compliance and security requirements.

Example: Suppose a small IT consulting firm wants to store client project files securely in Azure. They create a new storage account named ‘consultingdocs123’, select Standard performance for a general workload, choose GRS for disaster recovery, require HTTPS for all connections, and enable soft delete to prevent accidental data loss.

Use Case: A new Azure user in IT sets up a storage account to centralize team documentation and project backups. By enabling soft delete and versioning, they protect critical files from accidental deletions or overwrites, making it easy to restore previous versions if mistakes happen.
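
A sketch of creating the example account with Azure CLI (the resource group and region are placeholders):

  # Standard general-purpose v2 account with geo-redundancy and HTTPS enforced
  az storage account create --name consultingdocs123 --resource-group storage-rg \
    --location eastus --sku Standard_GRS --kind StorageV2 \
    --access-tier Hot --https-only true --min-tls-version TLS1_2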

For more information see these links:


Configure Azure Storage redundancy

  • Azure Storage redundancy refers to how your data is copied to protect it from hardware failures, outages, or disasters. Azure offers several redundancy options that determine where and how many times your data is replicated.
  • There are four main redundancy options: Locally Redundant Storage (LRS), which keeps three copies of your data in a single data center; Zone-Redundant Storage (ZRS), which copies data across three different availability zones in the same region; Geo-Redundant Storage (GRS), which stores data in both a primary region and a distant secondary region; and Geo-Zone-Redundant Storage (GZRS), combining the benefits of ZRS and GRS.
  • You can configure or change the redundancy settings for a storage account using the Azure Portal, PowerShell, or Azure CLI. Most changes do not require application downtime, making it easy to increase your data protection as your needs change.
  • Choosing the right redundancy depends on your business needs: LRS is cost-effective for non-critical data, ZRS offers higher local availability, GRS protects against regional disasters, and GZRS provides the highest level of resilience.
  • For critical workloads, consider using geo-redundancy (GRS or GZRS), so that your applications can continue to access your data from the secondary region if the primary region becomes unavailable.

Example: Suppose a company stores important customer documents in Azure Storage. If they use LRS, all data copies are kept in one data center. If that data center has a fire, all three copies could be lost. By using GRS, the company’s documents are also stored in another region hundreds of miles away, so the documents stay safe and accessible even if the first data center is destroyed.

Use Case: An IT administrator at a government agency is responsible for securely storing citizen records in Azure. To meet compliance and ensure these records are always available, they configure their storage account to use GZRS. This means the records are protected across multiple zones in their primary region and are also replicated to a secondary region, ensuring maximum availability and disaster recovery.
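
Redundancy is set through the account SKU; a sketch of checking and changing it with Azure CLI (account and resource group are placeholders, and some conversions, such as adding zone redundancy, go through a conversion process rather than a simple update):

  # Check the current redundancy setting
  az storage account show --name contosostorage01 --resource-group storage-rg \
    --query sku.name -o tsv

  # Move from locally redundant to geo-redundant storage
  az storage account update --name contosostorage01 --resource-group storage-rg \
    --sku Standard_GRS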

For more information see these links:


Configure object replication

  • Object replication allows you to automatically copy block blobs from one Azure Storage account (the source) to another (the destination), keeping data synchronized across accounts for redundancy or geographic distribution.
  • A replication policy must be set up, specifying the source and destination storage accounts as well as which containers and blobs to copy. You can create rules to selectively replicate blobs based on name prefixes or filter by virtual directories.
  • You can configure object replication directly through the Azure portal, PowerShell, or Azure CLI. The Azure portal makes it easy by automatically setting up both the source and destination accounts if you have access to both.
  • If you only have access to the destination account or need to define many replication rules, you can use a JSON file to export and share the policy definition. This helps when setting up replication across different organizations or tenants.
  • Before you start, ensure blob versioning is enabled on both the source and destination accounts and that change feed is enabled on the source. These are required for replication to work and can incur extra costs.

Example: Imagine a company wants to back up its marketing images stored in an Azure Storage account located in Europe to another storage account in North America for disaster recovery. By configuring object replication, all new images uploaded to the ‘images’ container in Europe will automatically be copied to the backup container in North America, without manual intervention.

Use Case: An IT administrator new to Azure is tasked with ensuring that daily reports generated and stored in their company’s main Azure Storage account are also available in a backup storage account in another region for business continuity. By configuring object replication between the two accounts, the administrator can automate this process, ensuring that any new or updated report files are readily available in the backup location.
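
A rough sketch of the prerequisites and replication policy with Azure CLI (the account, resource group, and container names are placeholders):

  # Versioning on both accounts, change feed on the source (prerequisites)
  az storage account blob-service-properties update --account-name contosoeurope \
    --resource-group storage-rg --enable-versioning true --enable-change-feed true
  az storage account blob-service-properties update --account-name contosobackupna \
    --resource-group storage-rg --enable-versioning true

  # Replication policy created on the destination account, copying the 'images' container
  az storage account or-policy create --account-name contosobackupna \
    --resource-group storage-rg --source-account contosoeurope \
    --source-container images --destination-container images-backup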

For more information see these links:


Configure storage account encryption

  • Azure Storage encrypts all data at rest by default using Microsoft-managed keys, providing basic protection for your files and data.
  • For greater control, you can configure customer-managed keys through Azure Key Vault or Managed HSM. This lets you use your own encryption keys for storage account data.
  • Encryption scopes allow you to create separate encryption settings within a storage account. This means you can assign different keys to different containers or blobs for flexible security management.
  • You can create or update an encryption scope using Azure CLI commands like ‘az storage account encryption-scope create’ and ‘az storage account encryption-scope update’—specifying the key source (Microsoft.Storage or Microsoft.KeyVault), key URI, and other options.
  • In cross-tenant scenarios, you may manage data in one Azure tenant and keep encryption keys in another, allowing extra separation and control. Azure supports configuring cross-tenant customer-managed keys for both new and existing storage accounts.

Example: Suppose a company stores sensitive customer data in an Azure Storage account. By default, Microsoft manages encryption keys, but for regulatory compliance, the company wants to control its own keys. They configure the storage account to use their own keys stored in Azure Key Vault, ensuring only authorized staff can manage or rotate these keys.

Use Case: An IT administrator new to Azure needs to set up a storage account for confidential project documents. To meet security policies, they use the Azure CLI to create an encryption scope tied to a customer-managed key in Azure Key Vault, following clear steps to specify the key source and URI. This setup gives them direct control over encryption and key rotation.
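
A sketch of the encryption-scope commands mentioned above (the account, scope name, and key URI are placeholders):

  # Scope using Microsoft-managed keys
  az storage account encryption-scope create --account-name contosostorage01 \
    --resource-group storage-rg --name projectdocs-scope --key-source Microsoft.Storage

  # Switch the scope to a customer-managed key held in Key Vault
  az storage account encryption-scope update --account-name contosostorage01 \
    --resource-group storage-rg --name projectdocs-scope \
    --key-source Microsoft.KeyVault \
    --key-uri "https://<vault-name>.vault.azure.net/keys/<key-name>"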

For more information see these links:


Manage data by using Azure Storage Explorer and AzCopy

  • Azure Storage Explorer is a free, graphical tool that allows you to manage files, blobs, queues, and tables in your Azure storage accounts without any coding. You can easily upload, download, and organize data using a familiar interface, making it ideal for users new to Azure.
  • AzCopy is a lightweight command-line utility designed for fast and reliable data transfers to and from Azure Storage. While it requires using commands, AzCopy supports automation, large file uploads, and resuming interrupted transfers, making it powerful for regular or large-scale data movement.
  • Both Azure Storage Explorer and AzCopy can be used together: Storage Explorer uses AzCopy behind the scenes for data transfers, giving users the choice of graphical or command-line approaches based on their comfort level. For occasional, small transfers, Storage Explorer is easiest; for scheduled or bulk tasks, AzCopy is more efficient.
  • For small datasets or when working with low to moderate network bandwidth, using Azure Storage Explorer is recommended for its ease of use, while AzCopy offers speed and advanced options for power users or those automating tasks.
  • Typical workflows include uploading files from your local desktop to Azure Blob Storage, downloading data for local access, and synchronizing folders between local storage and the cloud, supporting basic backup or migration scenarios for individuals and small teams.

Example: Imagine a new IT technician needs to upload daily log files (less than 100 MB) from their local computer to a cloud storage account for backup. Using Azure Storage Explorer, they can drag and drop the log files into the Azure Blob Storage folder each morning. Alternatively, they can set up a simple AzCopy command to automate this task with a scheduled script.

Use Case: An entry-level IT administrator at a small business is tasked with moving client reports from local servers to Azure Blob Storage for secure backup and easy sharing. They start by using Azure Storage Explorer’s graphical interface to upload files manually, then advance to using AzCopy scripts for regular, automated transfers, minimizing manual work and ensuring data is backed up consistently.
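
A sketch of the AzCopy commands for that workflow (the account, containers, SAS token, and local paths are placeholders):

  # One-off upload of a local folder into Blob Storage
  azcopy copy "C:\logs" "https://contosostorage01.blob.core.windows.net/backups?<SAS-token>" --recursive

  # Keep a local folder and a container in sync for repeated runs
  azcopy sync "C:\reports" "https://contosostorage01.blob.core.windows.net/reports?<SAS-token>"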

For more information see these links:


Configure Azure Files and Azure Blob Storage


Create and configure a file share in Azure Storage

    1. Creating an Azure File Share: To create a file share in Azure Storage, access your storage account in the Azure portal. Navigate to ‘File shares’ under ‘Data storage’ and select ‘+ File share’ to start the setup. You will need to provide a name (using only lowercase letters, numbers, and hyphens), select the type (such as SMB or NFS), and set the capacity.
    2. Configuring File Share Settings: During creation, configure essential properties like access tier (e.g., hot, cool, or transaction optimized), backup settings, and provisioned storage size. This ensures your file share meets your performance and cost needs while enabling important features like automated backup.
    3. Connecting and Managing Access: After the file share is created, retrieve the connection string from ‘Access keys’ in the storage account’s ‘Security + Networking’ section. Use this connection string in your applications. For secure access, store the string in Azure Key Vault and configure network settings, such as private endpoints, to restrict access to your organization only.
    4. Organizing and Using Files: In the Azure portal, create directories within your file share and upload files to organize and manage your data. You can do this manually via the portal or automate it using Azure CLI or PowerShell commands.
    5. Real-Time Collaboration: Azure file shares can be easily integrated with Windows, Linux, or cloud-based workflows, allowing teams to share files, folders, and collaborate securely from anywhere.

Example: A small IT consulting team wants to share project documentation, scripts, and logs among team members. They create an Azure file share named ‘projectdocs’ in their Azure Storage account, set the provisioned capacity to 50 GiB, and enable backup. Each team member maps the file share as a network drive on their PC, enabling quick and secure access to shared files whether they’re working from the office or remotely.

Use Case: A beginner-level IT administrator at a company migrates local shared folders to Azure Files to enable remote work. After creating the file share in Azure Storage, the administrator configures private endpoints for added security and stores the connection string in Azure Key Vault. Employees can now easily access shared resources securely over the internet using standard file sharing protocols.
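
A sketch of creating the share from the example with Azure CLI (the account and resource group are placeholders):

  # 50 GiB SMB file share in an existing storage account
  az storage share-rm create --storage-account contosofiles01 \
    --resource-group storage-rg --name projectdocs --quota 50 --enabled-protocols SMB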

For more information see these links:


Create and configure a container in Blob Storage

  • A container in Azure Blob Storage acts like a folder to organize your files (called blobs). Each storage account can have unlimited containers, and each container can store unlimited blobs.
  • You can create a container using the Azure portal by navigating to your Storage account, selecting ‘Containers’ under ‘Data storage,’ and then clicking ‘+ Container’.
  • When creating a container, you must provide a unique name that is all lowercase, may include numbers and dashes, and is between 3 and 63 characters.
  • You need to set an ‘Anonymous access level’ for the container. The most secure option is ‘Private (no anonymous access)’, but you can also allow public read access if needed.
  • After creation, you can upload, download, or manage blobs within the container, making it easy to store and organize files such as images, documents, or backups.

Example: Imagine you work in IT for a small company and need to store client documents in the cloud. You can create a container in Blob Storage called ‘clientdocs’ and upload all client-related files there, organizing your storage securely and efficiently in Azure.

Use Case: A new Azure user at an IT services firm needs to securely store and organize project images for different customers. By creating a separate container for each client in Azure Blob Storage, they ensure that each customer’s files are organized and access can be managed separately, improving security and simplifying file management.
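
A sketch of creating the container with Azure CLI (the account name is a placeholder; --auth-mode login uses the signed-in identity instead of an account key):

  # Private container, no anonymous access
  az storage container create --account-name contosostorage01 \
    --name clientdocs --public-access off --auth-mode login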

For more information see these links:


Configure storage tiers

  • Storage tiers in Azure, such as Hot, Cool, and Archive for Blob Storage, allow you to optimize costs by storing data based on how frequently it is accessed. The Hot tier is for data accessed frequently, Cool is for infrequently accessed data stored for at least 30 days, and Archive is for rarely accessed data kept for long-term retention (at least 180 days), where retrieval can take hours.
  • You can assign storage tiers at both the storage account and blob level. This means you can choose a default tier for new data or change the tier of existing blobs as access patterns change. It is easy to move data between tiers using the Azure Portal or automation tools.
  • Automating tiering using built-in lifecycle management policies or third-party solutions like Komprise Intelligent Tiering helps manage large volumes of unstructured data by moving older or less-used files to lower-cost storage automatically. This reduces manual work and helps control storage expenses.
  • When configuring storage tiers in enterprise scenarios, consider data retention, access needs, and regulatory requirements. For example, files from different departments can be grouped and tiered based on specific requirements using policy groups.
  • Data integrity is maintained during tiering operations. Tools like Komprise ensure the moved files remain accurate using checksums (MD5 for NFS, SHA-1 for SMB) and provide activity reports for tracking and auditing.

Example: A company stores backup copies of project files in Azure Blob Storage. The latest backups are set to the Hot tier for quick recovery if needed. After 30 days without access, an automated policy moves those backups to the Cool tier to save money, and eventually to the Archive tier for long-term retention.

Use Case: An IT administrator at a mid-sized business uses Azure’s lifecycle management policies and Komprise Intelligent Tiering to automatically move infrequently used employee records and old project documents to lower-cost Azure storage. This streamlines compliance, reduces storage costs, and ensures high-value files are always available when needed.
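
A sketch of changing tiers at the account and blob level with Azure CLI (names are placeholders; lifecycle management policies handle this automatically at scale):

  # Default tier for new blobs in the account
  az storage account update --name contosostorage01 --resource-group storage-rg \
    --access-tier Cool

  # Move one existing blob to the Archive tier
  az storage blob set-tier --account-name contosostorage01 \
    --container-name backups --name project-2023.bak --tier Archive --auth-mode login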

For more information see these links:


Configure soft delete for blobs and containers

  • Soft delete for blobs and containers in Azure Storage allows you to recover data that has been deleted accidentally. When enabled, Azure keeps deleted blobs or containers in a recoverable state for a set retention period (from 1 to 365 days).
  • You can restore soft-deleted blobs individually (including their snapshots and versions) using the Azure Portal, CLI, PowerShell, or REST API. Similarly, deleted containers can also be restored if container soft delete is enabled.
  • Soft delete does not protect against deletion of the storage account itself. For fuller protection, it’s recommended to enable blob versioning and to apply resource locks in addition to soft delete.
  • You can change the retention period at any time; however, only new deletions will use the updated retention. After the retention period ends, the data is permanently deleted and cannot be restored.
  • Soft delete helps meet compliance and business continuity needs by giving IT teams time to recover from human errors or unintended deletions in cloud storage.

Example: An IT administrator at a small company accidentally deletes a critical report stored as a blob in an Azure Storage account. Because soft delete is enabled with a 30-day retention period, the administrator simply navigates to the Azure Portal, toggles ‘Show deleted blobs,’ locates the deleted report (shown with a ‘Deleted’ status), and restores it before the retention period expires.
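
The same protection can be switched on and used from the Azure CLI; the sketch below (resource group, account, container, and blob names are illustrative) enables 30-day retention for both blobs and containers and then restores a deleted blob:

```
az storage account blob-service-properties update --resource-group rg-storage \
    --account-name mystorageacct \
    --enable-delete-retention true --delete-retention-days 30 \
    --enable-container-delete-retention true --container-delete-retention-days 30
az storage blob undelete --account-name mystorageacct --container-name reports \
    --name quarterly-report.xlsx --auth-mode login
```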

Use Case: A new-to-Azure IT professional configures soft delete for blobs and containers in their company’s Azure Storage account to protect important files and folders. This enables the recovery of marketing assets or customer data even if someone accidentally deletes them, ensuring business operations continue smoothly without data loss.

For more information see these links:


Configure snapshots and soft delete for Azure Files

  • Azure Files supports snapshots, which are read-only, point-in-time copies of your data. You can use snapshots to restore files or folders to a previous state if they are changed or deleted accidentally.
  • Soft delete for Azure Files protects entire file shares from accidental deletion by retaining deleted shares in a recoverable state for a specified retention period (between 1 and 365 days). During this retention, you can undelete and restore the whole share along with its data and snapshots.
  • You can configure and manage both snapshots and soft delete through the Azure Portal, PowerShell, or Azure CLI. Azure Backup can automate snapshot creation and retention, and will also enable soft delete for all file shares when it is set up for any share in the storage account.
  • Setting an appropriate retention period for soft delete ensures optimal data protection while managing storage costs, since soft deleted shares still count toward your storage quota until fully purged.
  • In the event of accidental or malicious deletion, these features help quickly recover data without needing to involve Azure Support, saving time and reducing downtime.

Example: Imagine your IT team accidentally deletes a shared folder containing important project documents from an Azure Files file share. Because soft delete is enabled, the entire file share (and its previous point-in-time snapshots) can be recovered easily within the configured retention period through the Azure Portal, restoring all files as they were before deletion.
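
Roughly the same configuration can be scripted with the Azure CLI (all names are illustrative): the first command enables soft delete for file shares in the account, the second takes a manual snapshot of one share:

```
az storage account file-service-properties update --resource-group rg-files \
    --account-name mystorageacct --enable-delete-retention true --delete-retention-days 14
az storage share snapshot --account-name mystorageacct --name team-docs
```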

Use Case: A small business uses Azure Files to store and share documentation and reports between departments. One day, an employee accidentally deletes their department’s file share. Thanks to soft delete and regular snapshots, the IT admin can quickly restore the file share and its contents from the most recent snapshot, minimizing business impact and avoiding data loss.

For more information see these links:


Configure blob lifecycle management

  • Lifecycle management in Azure Blob Storage lets you automate the moving and deletion of blobs based on rules you set, helping you optimize cost and manage data efficiently.
  • You can create policies to transition blobs to different storage tiers (hot, cool, archive) depending on how often they’re accessed or modified, or delete blobs when they are no longer needed.
  • Policies can be set in the Azure portal by navigating to your storage account, adding custom rules based on blob type, naming patterns, or last access/modification times, and saving them to automate blob management.
  • Enabling access time tracking allows you to set lifecycle rules based on when files were last used, making it easier to move inactive data to cheaper or archive tiers.
  • Best practices include using filters to target specific blobs, applying rules to entire accounts or selected containers, and regularly reviewing policies to ensure they align with evolving business needs.

Example: An IT team uploads daily log files to Azure Blob Storage. They create a rule to automatically move log files older than 30 days to the cool tier, and after 180 days, delete them. This way, they save money on storage costs for infrequently accessed logs, while ensuring old logs do not accumulate indefinitely.
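
A policy matching that example could be defined as JSON and applied with the Azure CLI, roughly as in this sketch (the account, resource group, and ‘logs/’ prefix are illustrative):

```
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "age-out-logs",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "delete": { "daysAfterModificationGreaterThan": 180 }
          }
        },
        "filters": { "blobTypes": [ "blockBlob" ], "prefixMatch": [ "logs/" ] }
      }
    }
  ]
}
EOF
az storage account management-policy create --account-name mystorageacct \
    --resource-group rg-storage --policy @policy.json
```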

Use Case: A new Azure user in IT sets up a lifecycle management policy for their storage account that automatically transitions backups older than 90 days into the archive tier and deletes previous versions after one year. This reduces costs and makes it easier to comply with data retention policies without manual intervention.

For more information see these links:


Configure blob versioning

  • Blob versioning in Azure automatically saves previous versions of a blob each time it’s modified or deleted. This means you can recover older versions if data is accidentally changed or deleted.
  • You can enable blob versioning directly in the Azure portal, using PowerShell, Azure CLI, or via an Azure Resource Manager template. In the portal, it’s easily accessible under Data Protection settings in your storage account.
  • Managing blob versions lets you quickly restore critical files, track changes over time, and ensure compliance for data retention. You can list, restore, or delete versions using Azure tools.
  • When enabling blob versioning, you can choose to keep all versions or configure rules to automatically delete old versions after a set period, helping balance data recovery needs and storage costs.
  • If you disable blob versioning, no new versions are saved, but existing blob versions remain accessible until you manually delete them. You can still read or delete previous versions after disabling versioning.

Example: Suppose an IT team stores user profile images as blobs in Azure Storage. If someone accidentally overwrites a profile image, blob versioning allows the team to quickly restore the previous image without needing a separate backup.
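
Versioning is a storage-account-level switch; a minimal CLI sketch (account, container, and blob names are illustrative) enables it and then lists the versions of one blob:

```
az storage account blob-service-properties update --resource-group rg-storage \
    --account-name mystorageacct --enable-versioning true
az storage blob list --account-name mystorageacct --container-name profile-images \
    --prefix user42.png --include v --auth-mode login --output table
```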

Use Case: A new Azure administrator in an IT department enables blob versioning on the company’s file storage account to protect against accidental data loss. Later, when a critical configuration file for an application is unintentionally overwritten, the admin uses blob versioning to restore the earlier, correct file version, ensuring no downtime for users.

For more information see these links:


Deploy and manage Azure compute resources (20–25%)


Automate deployment of resources by using Azure Resource Manager (ARM) templates or Bicep files


Interpret an Azure Resource Manager template or a Bicep file

  • ARM templates and Bicep files are tools in Azure for describing and automating the deployment of cloud resources. ARM templates use JSON (JavaScript Object Notation), while Bicep offers a simpler, more readable syntax.
  • These templates are built using declarative syntax, meaning you specify what resources you want (like virtual machines, databases, or IoT hubs) and their settings, rather than coding step-by-step instructions. This approach ensures consistent, repeatable deployments.
  • A standard ARM template or Bicep file includes sections for defining the resources (what to deploy), parameters (values you can change at deployment), variables (used for calculations or to simplify settings), and outputs (information about deployed resources).
  • Bicep files are converted to ARM templates during deployment, so you can use the same Azure tools (such as Azure CLI or Azure Portal) to deploy either format. This means you can pick the format you are most comfortable with.
  • ARM templates and Bicep files can be stored in source control repositories (like GitHub) and shared across teams, supporting collaboration and version control for your infrastructure.

Example: Suppose your company wants to create an Azure IoT Hub to connect and manage IoT devices. Instead of clicking through the Azure Portal each time, you can write a Bicep file that defines the IoT Hub’s specifications—such as its name, region, and pricing tier. Whenever a new IoT Hub is needed (for testing, development, or production), you simply run the same Bicep file, and Azure sets up the resource exactly as described.
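
Because Bicep compiles down to ARM JSON, one easy way to study how the two formats relate is to transpile a file yourself; this sketch assumes a local file named main.bicep:

```
az bicep build --file main.bicep --outfile azuredeploy.json
# Compare main.bicep with the generated azuredeploy.json to see how the
# parameters, variables, resources, and outputs sections map across formats.
```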

Use Case: A small IT team is responsible for supporting multiple development environments for an IoT solution. Using ARM templates or Bicep files, they can quickly spin up, update, or tear down entire environments (including IoT Hubs, databases, and virtual machines) with a single command. This saves time, reduces manual errors, and ensures all environments are consistent.

For more information see these links:


Modify an existing Azure Resource Manager template

  • Understand the ARM Template Structure: Azure Resource Manager (ARM) templates are JSON files that define Azure resources and configurations. Before making changes, familiarize yourself with the template’s sections: parameters, variables, resources, and outputs.
  • Edit Parameters to Increase Flexibility: You can add, modify, or remove parameters to allow users to customize deployments, such as specifying resource names or configuration values at deploy time. For example, adding a parameter for storage account name enables users to choose between creating a new storage account or referencing an existing one.
  • Apply Conditions for Optional Resources: ARM templates support conditional logic, letting you deploy resources only if certain conditions are met. By introducing parameters like ‘newOrExisting’, you can use a condition property to decide whether to create a resource based on user input.
  • Update Resource References: When you switch variables to parameters or add new parameters, make sure to update all references throughout the template, including within resource definitions and property values. This ensures the template uses the user-provided values during deployment.
  • Test and Format Your Modified Template: After making changes, use tools like Visual Studio Code with appropriate extensions to validate and format your ARM template. You can also use Azure CLI’s ‘what-if’ deployment mode to safely preview changes before deploying.

Example: Suppose you want to reuse an ARM template for deploying a virtual machine (VM) in your Azure environment, but want users to decide whether to use a new or existing storage account. You modify the template to add two parameters: ‘storageAccountName’ (accepts the name of the storage account) and ‘newOrExisting’ (lets the user pick ‘new’ or ‘existing’). You then update references and add a condition so that the storage account is only created if ‘newOrExisting’ is set to ‘new’.
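
After editing, you can preview the effect of the new parameters without deploying anything; a sketch (the resource group and parameter values are illustrative, the parameter names come from the example above):

```
az deployment group what-if --resource-group rg-lab \
    --template-file azuredeploy.json \
    --parameters storageAccountName=labstorage01 newOrExisting=existing
```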

Use Case: A beginner in IT, new to Azure, is tasked with setting up a training environment in Azure DevTest Labs. They find a public ARM template for a VM but need to adapt it so their team can choose to reuse a central department storage account or create a new one for each lab. By modifying the template with extra parameters and conditionals, they make the deployment flexible and reusable for different scenarios without editing the template each time.

For more information see these links:


Modify an existing Bicep file

    1. Open the existing Bicep file in Visual Studio Code, ideally with the Bicep extension installed. This extension helps with syntax highlighting, autocompletion, and validation as you edit.
    2. Make the necessary changes to your Bicep file. This could include updating resource properties (like changing the SKU for an Azure Storage Account), adding new resources, or removing resources no longer needed.
    3. Save your changes and use the extension’s validation features to check for errors. You can also use commands like ‘Bicep: Build’ or ‘Bicep: Validate’ in Visual Studio Code’s Command Palette to ensure the file is correct.
    4. Deploy the updated Bicep file directly from Visual Studio Code by using ‘Deploy Bicep File’, or use Azure CLI/PowerShell commands to redeploy the modified template to your resource group.
    5. After deployment, verify in the Azure portal that the changes have taken effect (for example, check if the resource properties reflect your updates or if new resources appear).

Example: Suppose you have a Bicep file that creates an Azure Storage Account with locally-redundant storage (Standard_LRS). If your organization needs geo-redundant storage instead, you simply open the file in Visual Studio Code, change the ‘sku’ property from ‘Standard_LRS’ to ‘Standard_GRS’, save the file, and redeploy. The storage account’s redundancy changes without needing to start from scratch.
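
Steps 3 and 4 from the list above can also be done from the Azure CLI; a hedged sketch assuming the edited file is named storage.bicep and the target resource group is rg-infra (both illustrative):

```
az bicep build --file storage.bicep            # surfaces syntax errors, if any
az deployment group what-if --resource-group rg-infra --template-file storage.bicep
az deployment group create --resource-group rg-infra --template-file storage.bicep
```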

Use Case: You’re an IT support specialist new to Azure, tasked with enabling better data durability for your company by upgrading the storage redundancy level. You locate the existing Bicep file in your team’s repository, update the necessary property, and redeploy—demonstrating how small Bicep file modifications can quickly support evolving business requirements.

For more information see these links:


Deploy resources by using an Azure Resource Manager template or a Bicep file

  • Azure Resource Manager (ARM) templates and Bicep files are used to automate deployment of Azure resources using a declarative approach, meaning you describe what you want to deploy instead of writing step-by-step scripts.
  • ARM templates are written in JSON, while Bicep files use a simpler, more readable syntax that is automatically converted into ARM templates for deployment. Both allow you to define resources such as virtual machines, networks, and storage accounts in code.
  • These templates can be reused, versioned, and integrated into source control or CI/CD pipelines, ensuring consistent deployments across multiple environments (like production and testing) and making it easier to update or scale infrastructure.

Example: Suppose an IT team needs to deploy a web application to Azure. Using a Bicep file, they can define all required resources—like an Azure App Service, a database, and a storage account—in code. Whenever they need to deploy the app (to development, staging, or production), they just run the Bicep file, and everything is created exactly as defined.
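
Deployment itself is a single command for either format; a minimal sketch (the resource group, file names, and parameter name are illustrative):

```
# Deploy a Bicep file
az deployment group create --resource-group rg-webapp-dev \
    --template-file main.bicep --parameters environmentName=dev
# Deploy an ARM JSON template (same command, different file)
az deployment group create --resource-group rg-webapp-dev \
    --template-file azuredeploy.json --parameters environmentName=dev
```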

Use Case: A new-to-Azure IT engineer wants to set up a reliable, repeatable process for creating and managing virtual machines for a company’s internal tools. Using Bicep, the engineer creates a template specifying the VM size, OS, and networking settings. Whenever a new team member needs a VM, the engineer can deploy it with one command, ensuring rapid setup with consistent configurations and security policies applied every time.

For more information see these links:


Export a deployment as an Azure Resource Manager template or convert an Azure Resource Manager template to a Bicep file

  • You can export an Azure Resource Manager (ARM) template or a Bicep file from existing resources in your Azure subscription. In the Azure portal, choose a resource group or specific resources, then use the ‘Export template’ option to download the configuration as either an ARM JSON template or a Bicep file (Bicep export is only available via the portal).
  • Exporting a template or Bicep file captures the current state and configuration of your resources, including manual changes made after deployment. These exported files can be reused for automating future deployments or scaling your environment, but might require modifications or cleanup before reuse.
  • If you have an ARM template in JSON format, you can convert it to a Bicep file using the Azure CLI or PowerShell ‘decompile’ command. This helps you move to the more readable and maintainable Bicep language for deployment automation, making it easier to update and understand your infrastructure as code.

Example: Suppose you create a virtual network and a virtual machine manually in the Azure portal for testing. Later, you want to set up the same resources in multiple environments, like development and production. You export the resource group as a Bicep file using the Azure portal. After downloading, you clean up the file to remove hardcoded settings and add parameters for flexibility. Now you can redeploy your virtual network and VM consistently by running the Bicep file in other environments.
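
From the command line, a comparable workflow is to export the resource group as ARM JSON and then decompile it to Bicep; a sketch (the resource group name is illustrative):

```
az group export --name rg-test > exported-template.json
az bicep decompile --file exported-template.json   # produces exported-template.bicep
```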

Use Case: A small IT team new to Azure provisions a web application by hand for their organization. When they need to automate regular deployments and enforce standard configurations, they export their existing resources as a Bicep file. With minor edits, the team creates a reusable template that saves time and reduces errors in setting up future web apps.

For more information see these links:


Create and configure virtual machines


Create a virtual machine

  • A virtual machine (VM) is a software-based computer that runs in the cloud, letting you use compute resources without investing in physical hardware. In Azure, you can create VMs on-demand for tasks such as application hosting, development, or testing.
  • To create a VM in Azure, you select an operating system image (Windows or Linux), decide on the hardware specifications (CPU, RAM, storage), and configure settings like administrator credentials and network access. This is typically done through the Azure Portal, a web-based management interface.
  • Once your VM is created, you can connect to it using tools like Remote Desktop Protocol (RDP) for Windows VMs or Secure Shell (SSH) for Linux VMs, install software, and adjust firewall or network settings as needed. You have full control over its configuration, just like a physical machine.
  • Azure offers flexibility with VM scale sets, high availability zones, and VM optimization features. You can choose the VM size and region that best meet your performance and budget requirements, paying only for what you use.
  • Managing VMs includes maintenance tasks such as installing updates, applying security patches, and monitoring usage. Azure provides tools to automate and simplify these tasks for beginners.

Example: Imagine you need a Windows server to test a new software application. Instead of buying hardware, you log in to the Azure Portal, create a Windows Server VM, choose the resources you need, set a username and password, then connect to the VM via Remote Desktop. You can start testing your application right away.
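
The same scenario scripted with the Azure CLI might look like this sketch (the names, VM size, and image alias are illustrative; supply your own strong password):

```
az group create --name rg-test --location eastus
az vm create --resource-group rg-test --name testvm01 \
    --image Win2022Datacenter --size Standard_B2s \
    --admin-username azureuser --admin-password '<your-strong-password>'
# The command output includes the public IP address to use for the RDP connection.
```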

Use Case: A small IT team wants to quickly set up a temporary environment to evaluate an HR management application. By creating a virtual machine in Azure, they can install the application, test its features, and safely discard the VM when testing is done, saving time and cost.

For more information see these links:


Configure Azure Disk Encryption

  • Azure Disk Encryption protects the data stored on your virtual machine’s disks by encrypting the OS and data disks. This ensures that if someone gains access to the underlying storage, they cannot read the data without the proper keys.
  • Azure offers several ways to encrypt managed disks: server-side encryption at rest (using platform-managed or customer-managed keys), ‘encryption at host’ for end-to-end protection from the VM host to storage, and Azure Disk Encryption, which encrypts inside the guest OS using BitLocker (Windows) or DM-Crypt (Linux) with keys kept in Azure Key Vault.
  • To use customer-managed keys through the Azure portal, you first create a Disk Encryption Set backed by a Key Vault that stores the cryptographic keys, then associate that set with your VM’s disks during or after VM creation. Azure Disk Encryption, by contrast, is enabled on an existing VM and pointed at a Key Vault that holds its encryption keys.
  • Azure supports disk encryption for both Windows and Linux VMs, and allows you to check the encryption status or update the encryption keys from within the Azure portal or with command-line tools.
  • It’s important to backup your VM or take snapshots before enabling encryption, as the encryption process may cause the VM to reboot and requires permissions to the Key Vault for managing encryption keys.

Example: A small business wants to store confidential customer data on their Azure virtual machine. The IT administrator uses the Azure portal to enable disk encryption when setting up the VM. They select ‘Encryption at host’ and choose a customer-managed key from Azure Key Vault, ensuring all data written to the VM’s disks is encrypted automatically.
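
If you take the Azure Disk Encryption (in-guest) route instead, a minimal CLI sketch looks like the following; the VM and Key Vault names are illustrative, and the Key Vault must be enabled for disk encryption:

```
az vm encryption enable --resource-group rg-prod --name finance-vm \
    --disk-encryption-keyvault kv-disk-keys --volume-type ALL
az vm encryption show --resource-group rg-prod --name finance-vm   # check encryption status
```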

Use Case: A new IT administrator at a company deploying its first Azure VM uses disk encryption to meet compliance requirements for encrypting customer financial information. By following the step-by-step process in the Azure portal, they ensure that all data on the virtual machine’s disks is encrypted and only accessible by authorized users.

For more information see these links:


Move a virtual machine to another resource group, subscription, or region

  • Moving a virtual machine (VM) between resource groups, subscriptions, or regions in Azure is a way to reorganize, optimize, or scale your cloud resources. You can use the Azure portal, CLI, or PowerShell tools to initiate the move based on what fits your workflow best.
  • When moving a VM to another resource group or subscription, all dependent resources (like disks and network interfaces) must be moved together to avoid issues. Some special cases (like encrypted disks or Marketplace VMs) may require extra steps or are not supported for moving.
  • To move a VM to another region, use the Azure Resource Mover service. This tool helps you select the resources you want to move, validates dependencies, and guides you through each step, ensuring a seamless transfer between regions for scenarios like disaster recovery or compliance.
  • The move process involves selecting the source and destination, choosing the VM and its dependencies, reviewing settings, and confirming the move. Always double-check quotas, supported scenarios, and limitations before beginning to avoid disruptions.
  • Not all VM configurations are supported for moves. For example, scale sets with standard SKU load balancers or VMs tied to specific Marketplace plans may not be movable. Check documentation to confirm your VM’s eligibility and plan accordingly.

Example: Imagine you set up a development VM for a new project in the ‘Dev’ resource group. As the project grows, it moves to production, and you need to transfer the VM to the ‘Production’ resource group in the same subscription. Using the Azure portal, you select the VM and its associated disk, initiate the move, and Azure handles the transfer while keeping all settings intact. This keeps your resources organized and aligns access controls with company policies.
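
The resource-group move in that example could be scripted roughly like this (resource and group names are illustrative; every dependent resource ID must be included in the same move):

```
VM_ID=$(az vm show --resource-group Dev --name projectvm --query id --output tsv)
DISK_ID=$(az disk show --resource-group Dev --name projectvm_OsDisk --query id --output tsv)
NIC_ID=$(az network nic show --resource-group Dev --name projectvm-nic --query id --output tsv)
az resource move --destination-group Production --ids "$VM_ID" "$DISK_ID" "$NIC_ID"
```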

Use Case: An IT administrator at a mid-size company is tasked with moving a VM from the European region to the US region to comply with new data residency requirements. Using Azure Resource Mover, the admin selects the VM and its dependencies, reviews settings, validates quotas, and performs the move—all while keeping service downtime minimal and maintaining compliance.

For more information see these links:


Manage virtual machine sizes

  • Virtual machine sizes in Azure determine the amount of CPU, memory, storage, and network capacity available for your VM. Choosing the right size helps balance performance and cost.
  • You can change the size of an existing VM if your needs change. This is useful, for example, if your web application starts receiving more traffic and needs more resources.
  • To resize a VM in the Azure portal, select your VM, go to the ‘Size’ menu, choose from the list of available sizes, and click ‘Resize’. Note that resizing may require the VM to restart, which can cause temporary downtime.
  • Some sizes may not appear if the VM is running or if specific hardware resources are not available in your region. Stopping (deallocating) the VM can sometimes show more size options.
  • Always review any cost changes before resizing, and be mindful that deallocating a VM releases its dynamic public IP address, so you may need to update any references to that IP elsewhere.

Example: Imagine you launched a basic Azure virtual machine to test a new application. After a few weeks, the application becomes popular, and the server is slowing down. You can easily increase the VM size in the Azure portal to add more CPU and memory, improving performance for your users.
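
With the CLI you can first list the sizes available to the VM and then resize it; a sketch with illustrative names and target size:

```
az vm list-vm-resize-options --resource-group rg-apps --name app-vm --output table
az vm resize --resource-group rg-apps --name app-vm --size Standard_D4s_v5
```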

Use Case: An IT administrator at a small business creates a standard-size VM to run payroll software in Azure. As the company grows, the VM slows during payroll processing at the end of each month. The IT admin uses the Azure portal to resize the VM to a more powerful option, minimizing processing time without needing to rebuild a new machine or migrate data.

For more information see these links:


Manage virtual machine disks

  • Virtual machines (VMs) in Azure use disks to store the operating system, applications, and data. Each VM is created with an OS disk and a temporary disk, and you can add data disks for storing applications and files.
  • You can create, attach, detach, and delete disks using the Azure portal, PowerShell, or CLI. This makes it easy to expand storage as needs change, or remove disks when they are no longer required.
  • There are different types of disks, such as Standard and Premium, each offering different performance and cost profiles. Choosing the right disk type is important for matching your workload’s speed and reliability requirements.
  • It’s important to monitor disk performance and usage. Each VM size in Azure allows a specific number of attached disks and total input/output operations per second (IOPS) and throughput, so plan accordingly.
  • Always detach disks carefully to avoid data loss. Detached disks are still charged as stored data until deleted, so keep track to avoid unnecessary costs.

Example: Imagine you are setting up a virtual machine for a small business website. Initially, you install the operating system on the OS disk. As your website grows and you need more space for images and databases, you can attach a new data disk in the Azure portal, assign it to your VM, and use it to store all new files and data.
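
Attaching a new data disk like the one in the example takes a single CLI call, and detaching is just as simple; a sketch with illustrative names, size, and SKU:

```
az vm disk attach --resource-group rg-web --vm-name web-vm \
    --name web-data01 --new --size-gb 256 --sku Premium_LRS
az vm disk detach --resource-group rg-web --vm-name web-vm --name web-data01
```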

Use Case: An IT administrator at a company needs to upgrade their VM to handle increased demand for data processing. They use the Azure portal to attach a larger Premium data disk to their existing VM, and soon after, migrate the company’s applications and databases to this disk, ensuring the system remains fast and scalable without downtime.

For more information see these links:


Deploy virtual machines to availability zones and availability sets

  • Availability sets are groups of virtual machines (VMs) within the same Azure datacenter, distributed across different hardware to reduce the chance of all VMs failing due to a localized issue (like power or networking problems). This setup improves uptime and is ideal for applications that need high availability in regions without Availability Zones.
  • Availability zones offer a higher level of reliability by placing VMs in physically separated locations within an Azure region. Each zone has its own independent power, cooling, and networking, protecting VMs from datacenter-wide failures and enabling recovery if one zone goes down.
  • Deploying VMs in availability sets or zones helps meet Azure’s service-level agreement (SLA) for uptime—at least 99.95% for availability sets and 99.99% for availability zones when VMs are correctly distributed. Using two or more VMs in these configurations ensures your applications remain accessible even during outages.
  • Choosing between availability sets and zones depends on your application’s needs and regional support: use availability sets for cost-effective local fault tolerance and improved VM-to-VM latency, or availability zones for the highest reliability against regional outages.
  • When creating resources, Azure lets you select the deployment option (set or zone) in the portal or CLI; VMs should be assigned to these options during creation as they cannot be moved later. Organize VM roles (web, app, DB tiers) across sets/zones to ensure redundancy for each function.

Example: Imagine you’re building a simple website for your company. To keep it running during network issues or maintenance, you deploy two web server VMs in an availability set. If one VM goes offline for updates or hardware repair, the other continues serving your website—your users experience no downtime.
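
A hedged sketch of both options with the CLI (all names are illustrative): the first block creates an availability set and places two web VMs in it, the second shows the zone flag used for zonal placement instead:

```
az vm availability-set create --resource-group rg-web --name avset-web
az vm create --resource-group rg-web --name web01 --image Ubuntu2204 \
    --availability-set avset-web --generate-ssh-keys
az vm create --resource-group rg-web --name web02 --image Ubuntu2204 \
    --availability-set avset-web --generate-ssh-keys

# Zonal alternative: pin each VM to a different availability zone at creation time
az vm create --resource-group rg-web --name app01 --image Ubuntu2204 --zone 1 --generate-ssh-keys
az vm create --resource-group rg-web --name app02 --image Ubuntu2204 --zone 2 --generate-ssh-keys
```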

Use Case: A new IT administrator at a small business is setting up an internal payroll application on Azure. They deploy two VMs for the app across two different availability zones in the same Azure region. This ensures that payroll operations stay available for employees, even if a power failure affects one datacenter.

For more information see these links:


Deploy and configure Azure Virtual Machine Scale Sets

  • Azure Virtual Machine Scale Sets (VMSS) let you deploy and manage a group of identical, load-balanced VMs. You can easily create VMSS using the Azure portal, CLI, or templates, choosing between Windows or Linux images based on your needs.
  • VMSS automatically scales the number of VM instances up or down based on demand. You can set manual or automatic scaling rules, ensuring your application can handle more (or less) users without manual intervention.
  • You can centrally manage and deploy applications to all VM instances using Azure VM Applications. This means packaging your app once and deploying it quickly and securely to every VM in your scale set, with full control over access and versioning.
  • Configuration tools like PowerShell Desired State Configuration (DSC) allow you to automate the installation and configuration process for each VM in your scale set, keeping all instances consistent and up to date.
  • VMSS integrates with load balancers for high availability, making sure incoming traffic is evenly distributed across all your VM instances, and failed VMs are automatically replaced without downtime.

Example: For a beginner in IT, imagine you run a company website that expects more visitors during sales events. Using VMSS, you deploy five web server VMs that automatically increase to 15 if traffic spikes, ensuring your site stays fast for everyone. All VMs get the latest version of your website through VM Applications, saving manual update effort.
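
The scale set in that example could be created and later scaled out with two commands; a sketch (names and image alias are illustrative):

```
az vmss create --resource-group rg-web --name vmss-web --image Ubuntu2204 \
    --instance-count 5 --generate-ssh-keys
az vmss scale --resource-group rg-web --name vmss-web --new-capacity 15
```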

Use Case: An IT team for a small business sets up VMSS to host an internal inventory application. They use the Azure portal to create a scale set of Windows VMs, configure admin accounts, and deploy a line-of-business app using VM Applications. During busy inventory periods, they manually increase the VM count to maintain performance. With PowerShell DSC, the team ensures that every new VM is automatically configured with the correct app version and settings.

For more information see these links:


Provision and manage containers in the Azure portal


Create and manage an Azure container registry

  • Azure Container Registry (ACR) is a private, cloud-based registry that lets you securely store and manage container images and artifacts. With ACR, you can keep your images private, share them only with your team, and use them in your deployments.
  • Creating an ACR is simple in the Azure portal: sign in, search for ‘Container Registries’, click ‘Add’, fill in basic information like registry name, resource group, and location, then review and create. The process takes just a few minutes and allows you to get up and running quickly.
  • You can push container images from your development machine into your registry using standard Docker commands. Once your images are in the registry, they can be pulled by Azure services like Azure Kubernetes Service (AKS), Azure App Service, or by your team for local testing.
  • Access control and security are managed using Azure roles and policies. You can monitor usage, review activity logs, automate builds, and easily clean up old images through the Azure portal.
  • Regular management includes adding or deleting repositories, viewing available images, setting up webhooks for automated tasks, and adjusting permissions as your team changes.

Example: Suppose you are developing a web application and package it as a Docker container on your laptop. By creating an Azure Container Registry, you push (upload) this container image to your own secure storage in Azure. Later, your teammates or Azure services can pull (download) and run this container image directly from the registry.
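
A minimal CLI sketch of that push workflow (the registry name must be globally unique; all names and tags here are illustrative):

```
az acr create --resource-group rg-containers --name myteamregistry01 --sku Basic
az acr login --name myteamregistry01
docker tag webapp:v1 myteamregistry01.azurecr.io/webapp:v1
docker push myteamregistry01.azurecr.io/webapp:v1
```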

Use Case: A small IT team is building an internal tool that needs to be updated frequently. By creating their own Azure Container Registry, developers can push new versions of the app as container images. IT manages access so only team members can upload or download images. When ready to deploy, they pull the latest version directly from the registry to their Azure Kubernetes cluster for a fast and reliable release.

For more information see these links:


Provision a container by using Azure Container Instances

  • Azure Container Instances (ACI) allow you to run containers in the cloud without managing virtual machines or complex infrastructure. You simply provide a container image, and Azure runs it for you.
  • Provisioning a container in ACI through the Azure portal involves specifying key details like the container image location, resource group, CPU and memory requirements, ports to expose, and an optional DNS name label to make your application accessible over the internet.
  • ACI supports both Linux and Windows containers, and you can use images from public repositories (like Docker Hub) or private registries (such as Azure Container Registry). This flexibility makes it easy to deploy a wide variety of applications.
  • You can access and monitor your container’s status, logs, and configuration directly through the Azure portal, allowing for easy troubleshooting and management without needing command-line tools.
  • ACI is ideal for running isolated, short-lived, or burst workloads, such as testing new application versions, running scheduled jobs, or deploying microservices quickly.

Example: Suppose you have a simple web application packaged as a Docker container and stored in Azure Container Registry. Using the Azure portal, you create a new Azure Container Instance, point it to your container image, set the required CPU and memory, open port 80, and assign a DNS name. Within moments, your application is running in the cloud and accessible via a public URL—no infrastructure setup required.
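
The CLI equivalent is a single az container create call; this sketch uses Microsoft’s public aci-helloworld sample image so it runs without registry credentials (the resource group and DNS label are illustrative):

```
az container create --resource-group rg-demo --name aci-demo \
    --image mcr.microsoft.com/azuredocs/aci-helloworld \
    --cpu 1 --memory 1.5 --ports 80 --dns-name-label aci-demo-example
az container show --resource-group rg-demo --name aci-demo \
    --query ipAddress.fqdn --output tsv   # public URL of the running container
az container logs --resource-group rg-demo --name aci-demo
```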

Use Case: An IT team new to Azure wants to quickly try out a new internal tool for file processing. They package the tool as a container image and deploy it with Azure Container Instances via the portal to test its performance and accessibility, all without having to learn or configure virtual machines or Kubernetes.

For more information see these links:


Provision a container by using Azure Container Apps

  • Azure Container Apps lets you run containerized applications without managing infrastructure. You simply upload or reference a container image, and Azure handles scaling and networking.
  • Provisioning a container app in Azure usually starts in the Azure portal, where you can select to create a new Container App. You’ll need to choose or create a resource group, name your container app, and select or create a Container Apps environment for isolation and communication purposes.
  • To deploy your app, you point Azure to a container image. This image can be from Azure Container Registry, Docker Hub, or another registry. You enter the registry details and image name, and Azure will pull, run, and expose your app based on your settings.
  • Azure Container Apps can scale automatically based on demand. You set scaling rules (e.g., autoscale up to 10 replicas) to handle fluctuating workloads without worrying about configuring servers.
  • Once provisioned, you can manage your container app from the portal: view logs, change configurations, redeploy updated images, or clean up resources easily.

Example: Imagine you built a simple web API using .NET and packaged it into a Docker image. You log into the Azure portal, create a new Azure Container App, and specify your image from Docker Hub. Azure handles the rest—your API is online and scalable in minutes.
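
The same deployment from the CLI would look roughly like this sketch (the environment, app, and image names are illustrative placeholders; the containerapp extension is required):

```
az extension add --name containerapp --upgrade
az containerapp env create --resource-group rg-demo --name aca-env --location eastus
az containerapp create --resource-group rg-demo --name chatbot-api \
    --environment aca-env --image docker.io/<your-account>/chatbot:latest \
    --target-port 80 --ingress external --min-replicas 0 --max-replicas 10
```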

Use Case: A beginner IT professional wants to deploy a proof-of-concept chat bot for internal use. Instead of learning complex cloud infrastructure or Kubernetes, they use Azure Container Apps: upload their bot’s container image, configure settings, and have a live, scalable chat bot accessible to their teammates, all managed through the Azure portal.

For more information see these links:


Manage sizing and scaling for containers, including Azure Container Instances and Azure Container Apps

    1. Azure Container Instances are basic building blocks for running containers in Azure. Sizing is set when you create the instance: you select CPU and memory amounts based on what your app needs, and you manually create or delete instances to scale up or down.
    2. Azure Container Apps allow automatic scaling based on demand and events. You set minimum and maximum numbers of container replicas, and the platform handles scaling up during high traffic and scaling down (even to zero) when idle, reducing costs.
    3. Scaling can be vertical (increasing CPU/memory for a single container) or horizontal (adding more container instances). Azure Container Apps focus on horizontal scaling, making them ideal for web or API workloads. You can adjust scaling rules without knowing infrastructure details.
    4. For beginners, Azure Container Apps provide a simple experience with scaling managed for you, while Container Instances give you manual control. Choose Container Apps for applications that need to automatically respond to changing demand.
    5. Best practices include setting resource limits appropriately, monitoring container performance, and using minimum replica settings in Container Apps to avoid cold starts for critical workloads.

Example: A small e-commerce website hosted on Azure Container Apps automatically scales out to handle more users during holiday sales and scales in to save money overnight when traffic is low. The scaling is managed by Azure—no manual intervention required.

Use Case: An IT team new to Azure needs to deploy a containerized payroll app. By using Azure Container Apps, they set min and max replicas and let the app scale automatically during monthly payroll processing spikes, ensuring performance without extra configuration.
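
Two short CLI sketches contrast the models (all names are illustrative): ACI sizing is fixed when the container group is created, while a Container App’s replica bounds can be changed at any time:

```
# ACI: CPU and memory are chosen at creation time
az container create --resource-group rg-demo --name batch-worker \
    --image mcr.microsoft.com/azuredocs/aci-helloworld --cpu 2 --memory 4

# Container Apps: adjust horizontal scaling bounds on an existing app
az containerapp update --resource-group rg-demo --name payroll-app \
    --min-replicas 1 --max-replicas 10
```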

For more information see these links:


Create and configure Azure App Service


Provision an App Service plan

  • An App Service plan in Azure defines the amount of resources (such as CPU, memory, and disk space) that are available for your web app to run. All apps in the same App Service plan share these resources.
  • When creating an App Service plan, you must choose the operating system (Windows or Linux), the region (like East US or West Europe), and a pricing tier. The pricing tier determines the features and scale available to your apps and can be adjusted based on your needs.
  • You can create an App Service plan before or during the creation of your web app. Managing these plans allows you to control costs by selecting appropriate resource levels and scaling as your needs change.
  • Different plans support different numbers and sizes of virtual machine (VM) instances—scaling up (bigger VMs) or out (more VMs) can be done easily through the Azure portal to handle more traffic or improve performance.
  • If you need your app to be highly available, you can enable zone redundancy in supported regions, spreading your resources across multiple data centers.

Example: Suppose you’re building your first company website using Azure. During setup, you’re asked to choose an App Service plan. You select the ‘Basic’ tier in the ‘West US’ region with Windows as the OS, which gives you enough resources to host your site and keep costs low. Later, as your website attracts more visitors, you can upgrade to a higher tier or add more VM instances to handle the increased traffic.
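
Creating the plan from that example with the CLI is one command (the resource group and plan names are illustrative):

```
az appservice plan create --resource-group rg-web --name plan-company-site \
    --location westus --sku B1
```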

Use Case: A small IT startup is developing multiple internal tools and websites, all in the same Azure region. To save costs and streamline management, they provision a single App Service plan and run those apps under it. As usage grows, they monitor resource usage and easily scale up the plan or split high-traffic apps into separate plans if needed.

For more information see these links:


Configure scaling for an App Service plan

  • Scaling an App Service plan allows you to adjust resources (like CPU, memory, and disk space) to match your app’s demand. You can scale up (vertical scaling) by changing to a higher pricing tier for more resources or scale out (horizontal scaling) by increasing the number of instances that run your app.
  • Scaling up is ideal when your app needs more power or features, such as dedicated VMs, custom domains, certificates, or staging environments. This is done by selecting a new pricing tier in the Azure portal—no code changes or redeployments needed.
  • Scaling out handles more visitors by running multiple copies of your app on different servers (instances). You can do this manually or set up automatic scaling, which increases or decreases the number of instances based on incoming traffic or resource usage.
  • Scaling changes apply to all apps within the same App Service plan. It’s important to monitor usage so you don’t over-provision and waste resources or under-provision and impact performance.
  • Related resources, like databases or storage, are scaled separately. If your app depends on these, ensure they are scaled to handle increased demand as well.

Example: Suppose you have a small company website on Azure App Service that experiences heavy traffic during a big marketing campaign. By scaling out, you can quickly increase the number of instances to keep your website fast and responsive, then scale back down after the campaign to save costs.
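
Both directions of scaling can be applied to an existing plan from the CLI; a sketch with illustrative names and tier:

```
# Scale up: move every app in the plan to a larger pricing tier
az appservice plan update --resource-group rg-apps --name plan-internal --sku P1V3
# Scale out: run three instances of every app in the plan
az appservice plan update --resource-group rg-apps --name plan-internal --number-of-workers 3
```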

Use Case: An IT team new to Azure is deploying their first internal business app. As employee usage grows, they notice slow performance during peak times. By using the Azure portal to scale out the App Service plan to more instances and enabling autoscale, the team maintains good performance automatically, without manual intervention or code changes.

For more information see these links:


Create an App Service

  • Azure App Service is a platform for hosting web applications, REST APIs, and mobile backends, letting you run your code in the cloud without managing underlying infrastructure.
  • To create an App Service, you typically use the Azure Portal, Visual Studio Code, or command-line tools. You’ll need to choose a unique name for your app, select a runtime stack (such as .NET, Node.js, Python, or PHP), and pick an operating system (Linux or Windows) and a pricing tier based on your needs and budget.
  • App Service plans control resources (CPU, memory) and cost. For learning or testing, select the Free (F1) or Basic tiers, which are inexpensive and easy to set up.
  • Creating an App Service also lets you configure deployment settings, such as connecting to source control (like GitHub), enabling automatic deployment, and setting environment variables for your application.
  • Once created, you can monitor your app’s performance, set up custom domains, enable security features like HTTPS, and scale the app up or out with just a few clicks.

Example: Imagine a small business wants to launch a company website using Python Django. They use the Azure Portal to create a new App Service: select the ‘Web App’ option, pick Python as the runtime stack, choose a unique name, select the Free (F1) tier, and set up the deployment for their code. This process allows their developer to push updates easily and keep the website running with minimal effort.
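
A CLI sketch of that setup (the names are illustrative, the app name must be globally unique, and the exact runtime string can be checked with az webapp list-runtimes):

```
az appservice plan create --resource-group rg-company-site --name plan-free --sku F1 --is-linux
az webapp create --resource-group rg-company-site --plan plan-free \
    --name company-site-example --runtime "PYTHON:3.11"
```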

Use Case: A beginner IT professional at a company is tasked with creating an internal employee portal that requires single sign-on with Office 365. They follow step-by-step instructions to set up an Azure App Service, select Node.js (since the Office Add-in uses JavaScript), configure the app to work with their organization’s Azure Active Directory, and deploy the Office Add-in for seamless SSO access. This enables secure, centralized access to company tools without complex infrastructure management.

For more information see these links:


Configure certificates and Transport Layer Security (TLS) for an App Service

  • TLS (Transport Layer Security) protects data sent between your App Service and users by encrypting traffic. By default, Azure App Service uses HTTPS—which applies TLS—to secure communication on its default domain. When you add a custom domain, you must configure a TLS/SSL certificate to maintain this security.
  • You can manage certificates in Azure App Service in several ways: create a free App Service managed certificate, import an App Service certificate, upload your own certificate (.pfx), or import a certificate from Azure Key Vault. The best option depends on whether you have custom requirements or just need basic encryption for a public-facing app.
  • To configure TLS for a custom domain, you first upload or create a certificate in the App Service, then add a TLS/SSL binding in the Azure portal. You can select either SNI SSL (to support multiple domains on one IP) or IP-based SSL (less common). Azure recommends SNI SSL for most scenarios.

Example: Imagine an IT startup launches a web application for customers at myapp.azurewebsites.net. Users access the site securely via HTTPS by default. As the startup grows, they register the custom domain myapp.com and link it to their App Service. To secure this custom domain, they use the Azure portal to generate a free App Service managed certificate and attach it to the domain—ensuring all customer data is encrypted over HTTPS.

Use Case: A small business with minimal Azure experience wants to launch a company website using Azure App Service. They purchase the custom domain ‘contoso.com’, configure it on their App Service, use the built-in option to create a free managed certificate, and set the minimum TLS version to 1.2 in the settings. This provides secure HTTPS access for customers without complex manual certificate management.
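
A hedged CLI sketch of that use case (the app, resource group, and domain names are illustrative, and the custom domain must already be mapped to the app):

```
# Create a free App Service managed certificate for the custom domain
THUMBPRINT=$(az webapp config ssl create --resource-group rg-web --name contoso-site \
    --hostname www.contoso.com --query thumbprint --output tsv)
# Bind it to the app with SNI SSL
az webapp config ssl bind --resource-group rg-web --name contoso-site \
    --certificate-thumbprint "$THUMBPRINT" --ssl-type SNI
# Enforce TLS 1.2 as the minimum version
az webapp config set --resource-group rg-web --name contoso-site --min-tls-version 1.2
```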

For more information see these links:


Map an existing custom DNS name to an App Service

  • To map an existing custom DNS name to an Azure App Service, you first need to create the correct DNS records with your domain provider. For subdomains, use a CNAME record pointing to your app’s default Azure URL. For root domains, use an A record and a TXT record for verification.
  • Verify ownership of your custom domain in the Azure portal by adding a DNS TXT record with the verification ID that Azure supplies. This ensures that only the owner of the domain can link it to an App Service.
  • After DNS records are configured, bind your custom domain name to your App Service in the Azure portal by navigating to the ‘Custom domains’ section and following the prompts to validate and add your domain.
  • It’s important to validate your setup using tools like ‘nslookup’ to ensure the DNS changes have propagated and your custom domain correctly points to your Azure-hosted app.
  • Securing your custom domain with SSL/TLS certificates is recommended for safe, encrypted connections. Azure provides free managed certificates, and you can also bring your own if needed.

Example: Suppose you own ‘mycompany.com’ and have built a company website using Azure App Service. To use ‘www.mycompany.com’ instead of the default ‘mycompany.azurewebsites.net’, you log in to your domain provider, create a CNAME record for ‘www’ pointing to your Azure app URL, add the TXT verification record as instructed by Azure, and then validate/add the custom domain in the Azure portal.
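
Once the CNAME and TXT records exist at your DNS provider, the verification ID and the domain binding can be handled from the CLI; a sketch with illustrative names:

```
# Value to place in the asuid TXT record at your DNS provider
az webapp show --resource-group rg-web --name mycompany-site \
    --query customDomainVerificationId --output tsv
# Add the custom hostname once DNS has propagated (check with: nslookup www.mycompany.com)
az webapp config hostname add --resource-group rg-web --webapp-name mycompany-site \
    --hostname www.mycompany.com
```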

Use Case: An IT professional new to Azure wants to launch a business web app with a recognizable, branded URL. By connecting ‘www.businessname.com’ to Azure App Service, they can present a professional front to customers and make use of Azure’s scalable, secure hosting without deep domain management expertise.

For more information see these links:


Configure backup for an App Service

  • Azure App Service allows you to create custom backups of your app’s files and configuration to a secure Azure Storage account. This helps protect your app against accidental changes or data loss.
  • You can configure backups to run on demand or set up a schedule using the Azure Portal. Backups can be set to include or exclude certain folders and files using a special ‘_backup.filter’ file to save space or avoid unnecessary data.
  • Starting in 2028, custom backups will no longer support backing up linked databases (such as Azure SQL or MySQL). You should use the native backup tools provided by each database service for database backups.
  • If your App Service is integrated with an Azure Virtual Network, you can back up and restore over the network for enhanced security, making sure data is only transferred within your protected environment.
  • Restoring from a backup is straightforward and can be performed directly in the Azure Portal or using the Azure CLI. You can restore an app to its original state or to a deployment slot to minimize downtime.

Example: A small online retailer running a web app on Azure App Service sets up automated daily backups to a dedicated Azure Storage account. They use a _backup.filter file to exclude large static image files that rarely change, ensuring the backups finish quickly and stay within the storage limits.
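
A sketch of configuring an on-demand backup and a daily schedule from the CLI (the app and group names are illustrative, and the SAS URL placeholder must point to a container in your storage account):

```
# One-off backup to a storage container (supply a container SAS URL)
az webapp config backup create --resource-group rg-shop --webapp-name shop-site \
    --backup-name manual-01 --container-url "<container-SAS-URL>"
# Daily schedule, keeping 30 days of backups
az webapp config backup update --resource-group rg-shop --webapp-name shop-site \
    --container-url "<container-SAS-URL>" --frequency 1d --retention 30 --retain-one true
```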

Use Case: A new Azure user develops an internal tool for their company using Azure App Service. To prevent potential data loss from accidental updates or deletion, they configure weekly backups of the app files and settings to a secure storage account. Later, when an unexpected change causes the app to break, the user quickly restores the app to its previous working state using the backup.

For more information see these links:


Configure networking settings for an App Service

  • Understanding Network Configuration Options: Azure App Service allows you to configure important network settings such as FTP access, remote debugging, and private endpoint connections. These settings help control how your app interacts with networks for deployment, debugging, and secure communications.
  • Configuring Settings via Azure Portal or CLI: You can change networking settings at the App Service Environment level using the Azure Portal for a simple graphical interface, or Azure CLI for command-line control. For example, you can enable FTP, remote debugging, or private endpoints based on your needs.
  • Using Private Endpoints for Security: Enabling private endpoints ensures your App Service communicates securely within your organization’s virtual network, restricting public internet access. This improves security and compliance, especially useful for sensitive business applications.
  • DNS and Environment Variables: DNS configuration and environment variables such as WEBSITE_VNET_ROUTE_ALL and WEBSITE_PRIVATE_IP help control app connectivity, hybrid connections, and virtual network routing, making it easier to integrate your app with other Azure and on-premises resources.

Example: Imagine you have a simple company website running on Azure App Service. You want only your internal team to access the site’s back-end for updates via FTP. You go into the Azure Portal, select your App Service Environment, and enable FTP access, ensuring only authorized team members can upload files securely.
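
Some of the settings mentioned above can be flipped per app from the CLI; a sketch with illustrative names that restricts FTP to FTPS and turns remote debugging off:

```
az webapp config set --resource-group rg-apps --name pm-app --ftps-state FtpsOnly
az webapp config set --resource-group rg-apps --name pm-app --remote-debugging-enabled false
```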

Use Case: A small IT consulting firm creates an internal project management app using Azure App Service. To keep company data secure, the firm enables private endpoint connections, restricting access so that only users within their Azure Virtual Network (such as employees connecting from their corporate office or via VPN) can reach the app. This protects sensitive client and business information from potential external threats.

For more information see these links:


Configure deployment slots for an App Service

  • Deployment slots in Azure App Service allow you to create different versions of your web app, such as ‘production,’ ‘staging,’ or ‘development.’ This lets you test changes in isolation before updating the version users see.
  • Each slot runs independently with its own configuration and can connect to different branches or sources in your code repository (for example, different GitHub branches). This helps teams safely develop and test new features.
  • You can easily swap slots (for example, move the staging slot to production), making it simple to update your live app with minimal downtime. If an issue occurs, you can quickly swap back.
  • Slots can be set up and managed directly in the Azure portal or automated using tools like Terraform. This is helpful for teams working with DevOps practices.
  • Testing your app in a slot before making it live reduces the risk of unexpected errors. This is crucial for keeping business applications reliable and user-friendly.

Example: Imagine you are building a company website in Azure App Service. You create a ‘production’ slot for customers to view and a ‘staging’ slot to test updates before they go live. You deploy updates from the ‘development’ GitHub branch to the staging slot, check everything works, then swap it with production when ready.
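
Creating and swapping the slots from that example takes two CLI commands (the app and group names are illustrative; slots require a Standard tier plan or higher):

```
az webapp deployment slot create --resource-group rg-web --name company-site --slot staging
# After testing the staging slot, promote it to production
az webapp deployment slot swap --resource-group rg-web --name company-site \
    --slot staging --target-slot production
```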

Use Case: An IT team for a small business wants to update their internal HR app without affecting employees’ daily tasks. They use deployment slots to preview changes in the staging slot. After successful testing, they swap the new version into production, ensuring a smooth and error-free update.

For more information see these links:


Implement and manage virtual networking (15–20%)


Configure and manage virtual networks in Azure


Create and configure virtual networks and subnets

  • A virtual network (VNet) in Azure allows you to securely connect Azure resources, like virtual machines, using private IP addresses within a designated address range. You must define an address space (such as 10.0.0.0/16) when you create the VNet.
  • Subnets divide a virtual network into smaller segments, each with its own address range (for example, 10.0.0.0/24). Subnets help you organize resources, apply security controls, and manage traffic within your network.
  • You can create and configure VNets and subnets using various methods: the Azure portal (graphical user interface), Azure CLI (command line), or PowerShell. The steps involve specifying resource group, region, address spaces, and subnet details.
  • It’s important to size your address ranges and subnets properly so you have enough IP addresses for your virtual machines and other resources, and so your ranges don’t overlap with existing networks.
  • After creating subnets, you can later add, modify, or delete them to adapt your network structure as your requirements change, all through the portal, CLI, or PowerShell.

Example: Suppose a small IT company wants to run a web application and a database in Azure. They create a virtual network with the address range of 10.0.0.0/16. Inside this VNet, they set up two subnets: ‘web-tier’ with 10.0.1.0/24 for their web servers, and ‘db-tier’ with 10.0.2.0/24 for their database servers. This structure lets them apply different security rules to each subnet and manage the flow of traffic between tiers.
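
That example network could be built with the CLI roughly as follows (the resource group and region are illustrative):

```
az network vnet create --resource-group rg-net --name vnet-app --location eastus \
    --address-prefixes 10.0.0.0/16 --subnet-name web-tier --subnet-prefixes 10.0.1.0/24
az network vnet subnet create --resource-group rg-net --vnet-name vnet-app \
    --name db-tier --address-prefixes 10.0.2.0/24
```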

Use Case: A beginner just starting with Azure needs to set up a test environment for a new application. By creating a virtual network and separate subnets for app servers and backend services, they can isolate parts of their app, apply tailored access controls, and easily scale up resources without conflicts, all while following best practices for security and management.

For more information see these links:


Create and configure virtual network peering

  • Virtual network peering in Azure enables direct, private connectivity between two virtual networks. This allows resources such as virtual machines in different networks to communicate with each other securely without traffic leaving Azure’s backbone network.
  • You can create virtual network peering using the Azure portal, PowerShell, or Azure CLI. The process involves selecting which networks to peer, configuring access and routing settings, and confirming the connection. Most settings default to secure communication, but you can change them to suit your needs.
  • Peering works across different subscriptions and regions, provided you have the required permissions. It’s important to review access controls and select appropriate peering options, such as allowing gateway transit or traffic forwarding, based on your network topology.
  • Peering does not merge the networks. Each network retains its own configurations (such as subnets and security groups), but you can set rules to allow or block traffic between them. Monitor and manage existing peerings easily from the Azure portal.
  • You can edit or remove a peering if requirements change. Always refresh and check the peering status to ensure connections are active and properly configured. Regularly review your peerings for security and performance optimization.
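
A minimal Azure CLI sketch of peering two networks in the same resource group (names are placeholders; peering must be created in both directions):

    # Peer vnet-app to vnet-db
    az network vnet peering create --resource-group rg-network --name app-to-db \
        --vnet-name vnet-app --remote-vnet vnet-db --allow-vnet-access

    # Create the reverse peering so traffic can flow both ways
    az network vnet peering create --resource-group rg-network --name db-to-app \
        --vnet-name vnet-db --remote-vnet vnet-app --allow-vnet-access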

Example: Imagine a company has two Azure virtual networks: one for its internal application servers and one for its database servers. Using virtual network peering, they can allow the application servers to communicate securely with the database servers without exposing traffic to the public internet. The setup can be completed in minutes via the Azure portal by selecting the networks and confirming the peering settings.

Use Case: A new IT professional is developing a web application in Azure that uses resources from separate teams: one team owns a database in vnet-db, and another team owns web servers in vnet-web. By configuring virtual network peering between vnet-db and vnet-web, the web servers can efficiently query the database, enabling cross-team collaboration without complex VPN setups or public exposure.

For more information see these links:


Configure public IP addresses

  • A public IP address in Azure enables resources like virtual machines or firewalls to communicate with the internet or external services. By default, Azure resources inside a virtual network can’t be accessed from outside unless you assign them a public IP address.
  • You can create either dynamic or static public IP addresses. Dynamic addresses might change when a resource is stopped and started again, while static addresses remain the same throughout the resource’s lifetime. Choosing the right type depends on whether you need a consistent IP address for things like DNS records.
  • Assigning a public IP address is straightforward in the Azure portal. For example, you can add a public IP to a virtual machine by selecting its network interface and associating a public IP under the IP configurations section. Similarly, Azure Firewalls must have at least one public IP to handle traffic from the internet.
  • Public IP addresses have configuration options such as choosing between IPv4 and IPv6, setting routing preference (Microsoft network or Internet), and selecting an availability zone for resilience. These settings help optimize performance, cost, and availability.
  • It’s important to regularly review and manage public IP addresses to avoid unnecessary costs and security risks. Unused public IP resources can still incur charges, and exposing a resource directly to the internet should be done with proper security controls in place.
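
An illustrative Azure CLI sketch of creating a static public IP and attaching it to a VM's network interface (resource, NIC, and IP configuration names are placeholders):

    # Create a zone-redundant static public IP (Standard SKU)
    az network public-ip create --resource-group rg-network --name pip-web \
        --sku Standard --allocation-method Static --version IPv4 --zone 1 2 3

    # Associate it with an existing NIC's IP configuration
    az network nic ip-config update --resource-group rg-network --nic-name vm-web-nic \
        --name ipconfig1 --public-ip-address pip-web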

Example: Imagine you have a simple website hosted on an Azure Virtual Machine for your small business. To make this website accessible to your customers over the internet, you assign a public IP address to the VM. This way, anyone can enter the IP address (or a linked domain) in their browser and reach your site.

Use Case: A new IT admin at a company needs to set up remote access for the support team. They create a virtual machine in Azure to run essential tools, then configure a public IP address for the VM so authorized staff can securely access it over the internet using remote desktop, after setting up firewall rules for protection.

For more information see these links:


Configure user-defined network routes

  • User-defined routes (UDRs) in Azure let you control how traffic flows within your virtual networks. By creating custom routes, you can override Azure’s default routing decisions and specify the path for specific traffic.
  • To configure UDRs, you first create a route table within the Azure portal. Then, you add individual routes to this table, defining the destination address prefix and the next hop, such as a virtual appliance (like a firewall) or another device.
  • Once routes are added to a route table, you associate that route table to one or more subnets. This determines which subnets use your custom routes, allowing you to direct traffic as needed for specific workloads or security needs.
  • UDRs are commonly used to route traffic through network security appliances, block direct outbound internet access, or connect to on-premises environments securely through VPN gateways.
  • Azure prioritizes user-defined routes over system routes and BGP routes when there’s a conflict, ensuring your custom routing rules take precedence where needed.
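
An illustrative Azure CLI sketch of the route table workflow (names and the firewall IP 10.0.3.4 are placeholders):

    # Create a route table and a default route that sends all traffic to a firewall appliance
    az network route-table create --resource-group rg-network --name rt-db

    az network route-table route create --resource-group rg-network --route-table-name rt-db \
        --name to-firewall --address-prefix 0.0.0.0/0 \
        --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.3.4

    # Associate the route table with the subnet whose traffic should be steered
    az network vnet subnet update --resource-group rg-network --vnet-name vnet-app \
        --name db-tier --route-table rt-db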

Example: Suppose you have a virtual network with multiple subnets, and you want all outbound traffic from one subnet to pass through a firewall virtual appliance for inspection. You create a UDR in a route table that sets the next hop for all traffic (0.0.0.0/0) to the firewall’s IP address. Then, associate the route table to the subnet. Now, all outbound traffic from that subnet is routed through the firewall before reaching its destination.

Use Case: An IT administrator new to Azure needs to ensure that sensitive data from a database subnet does not leave the network unfiltered. By configuring a user-defined route, the administrator directs all outbound traffic from the database subnet to an Azure Firewall virtual appliance for monitoring and policy enforcement. This setup improves security and compliance for the organization’s cloud infrastructure.

For more information see these links:


Troubleshoot network connectivity

    1. Check Network Security Group (NSG) rules: NSGs control inbound and outbound traffic to Azure resources. If a connection isn’t working, review the effective security rules applied to your virtual machine or subnet to make sure the required ports or IP addresses aren’t blocked.
    2. Use Azure Network Watcher’s diagnostic tools: Azure provides tools like IP flow verify (to check if traffic is allowed/denied), NSG diagnostics (to detect traffic filtering issues), and Connection troubleshoot (to test connectivity and see where traffic is getting blocked). These tools can provide step-by-step insights into network issues.
    3. Verify routes with Next hop: If your resources can’t communicate, use the ‘Next hop’ diagnostic to check where your network packets are being routed. Misconfigured routes or missing route entries can sometimes cause connectivity problems.
    4. Capture and analyze packets: The Packet Capture tool lets you collect real network traffic from your Azure VMs. Analyzing this data can help you identify where communication fails, such as dropped packets or unexpected network behavior.
    5. Always check network configuration changes: If you’ve recently modified virtual network settings or reconfigured a device, make sure the changes are correct and reload the configuration. A mismatched or outdated config often causes connectivity issues.
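
Two of these checks can also be run from the Azure CLI; a brief sketch (VM, resource group, and IP addresses are placeholders):

    # IP flow verify: check whether a specific flow is allowed or denied by effective NSG rules
    az network watcher test-ip-flow --resource-group rg-app --vm vm-web \
        --direction Inbound --protocol TCP --local 10.0.1.4:443 --remote 203.0.113.10:60000

    # Next hop: see where traffic leaving the VM is routed
    az network watcher show-next-hop --resource-group rg-app --vm vm-web \
        --source-ip 10.0.1.4 --dest-ip 10.0.2.4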

Example: You have a virtual machine in Azure that can’t connect to a database running on another VM in the same virtual network. Using Azure Network Watcher’s Connection troubleshoot tool, you discover that an NSG rule is blocking traffic on the database port. Updating the NSG rule resolves your issue, and the connection is restored.

Use Case: An IT administrator new to Azure needs to deploy a web application. After setup, users report that the application is unavailable. The administrator uses Network Watcher’s IP flow verify and NSG diagnostics to quickly identify a missing rule in the NSG that was blocking HTTP traffic. After fixing the NSG, users can access the application.

For more information see these links:


Configure secure access to virtual networks


Create and configure network security groups (NSGs) and application security groups

  • Network Security Groups (NSGs) are used in Azure to control inbound and outbound network traffic to resources like virtual machines. NSGs act as virtual firewalls, allowing or blocking traffic based on defined security rules.
  • Creating an NSG involves specifying its name, resource group, and region in the Azure portal, PowerShell, or CLI. You can then add security rules to define which traffic should be allowed or denied, such as allowing HTTP traffic on port 80 to web servers.
  • Application Security Groups (ASGs) let you group network interfaces from servers with similar roles, such as web, database, or management servers, and apply NSG security rules to these logical groups. This makes managing and scaling security much easier, as you don’t have to configure rules for each individual IP address.
  • When you link NSGs and ASGs, you can create rules that target entire groups of resources, not just individual machines. For example, you can create a rule that only allows management traffic to servers in the ‘asg-mgmt’ group and restrict web traffic to servers in the ‘asg-web’ group.
  • To configure secure virtual network access, create an NSG, define relevant rules (like allowing only specific ports), and create ASGs to group related servers. Then, reference ASGs in NSG rules to enforce security at scale and simplify maintenance.
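
A minimal Azure CLI sketch of this pattern (group, NSG, and rule names are placeholders; NICs join an ASG through their IP configuration):

    # Create an ASG for web servers and an NSG for the application network
    az network asg create --resource-group rg-network --name asg-web
    az network nsg create --resource-group rg-network --name nsg-app

    # Allow HTTP/HTTPS from the internet only to members of asg-web
    az network nsg rule create --resource-group rg-network --nsg-name nsg-app --name allow-http-web \
        --priority 100 --direction Inbound --access Allow --protocol Tcp \
        --source-address-prefixes Internet --destination-asgs asg-web \
        --destination-port-ranges 80 443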

Example: An IT team runs a web application with several virtual machines in Azure. They create an ASG called ‘asg-web’ for all web servers and an NSG with a rule to allow inbound HTTP traffic only to this group. This setup ensures only the web servers receive HTTP traffic, keeping other resources secure.

Use Case: A beginner Azure administrator in an IT company needs to secure a virtual network hosting a web application. By creating an NSG to restrict traffic and using ASGs to group web and management servers, the admin can easily apply security rules that allow internet traffic to the web servers, while limiting access to management servers to only trusted IPs for remote administration.

For more information see these links:


Evaluate effective security rules in NSGs

  • Network Security Groups (NSGs) filter inbound and outbound network traffic to Azure resources like virtual machines by using security rules. NSGs can be applied at both the subnet level and the individual network interface level (NIC).
  • The effective security rules for a resource are a combination of all rules from the NSGs associated with both the subnet and the NIC. For inbound traffic, subnet NSG rules are evaluated first, then NIC NSG rules. For outbound traffic, this order is reversed.
  • NSG rules are processed in order of priority. Lower priority numbers mean higher precedence. Once a packet matches a rule, no more rules are evaluated. This ordering ensures specific rules can override more general default rules.
  • Tools like Azure Network Watcher’s ‘Security group view’ and ‘IP flow verify’ help visualize and diagnose effective NSG rules. These tools allow you to see which rules allow or deny specific types of traffic and help prevent unintended network exposure.
  • Always review effective security rules after making NSG changes to ensure you haven’t accidentally blocked or allowed unwanted traffic, especially when using both subnet-level and NIC-level NSGs.
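
To see the merged rule set for a specific network interface from the command line, one option is (NIC and resource group names are placeholders):

    # List the effective (subnet NSG + NIC NSG) security rules applied to a NIC
    az network nic list-effective-nsg --resource-group rg-app --name vm-web-nic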

Example: Suppose you have a web server VM inside a subnet. The subnet NSG allows HTTP and HTTPS inbound traffic from the internet (for all VMs in the subnet). However, on the VM’s NIC NSG, you add a rule to deny all inbound internet traffic except requests from your company IP. In this setup, even though the subnet NSG allows the traffic, the VM is protected by the more restrictive NIC NSG rule. This layered approach controls access precisely.

Use Case: An IT administrator new to Azure sets up a virtual network with multiple VMs, assigning some as web servers and others as databases. They apply an NSG at the subnet level to permit necessary web traffic for the web servers but use NIC-level NSGs on database VMs to restrict access further, blocking all inbound internet traffic. By evaluating effective security rules on each NIC using Azure Network Watcher, the administrator confirms that only the intended traffic gets through, strengthening network security and meeting organizational requirements.

For more information see these links:


Implement Azure Bastion

  • Azure Bastion is a managed service that provides secure and seamless RDP (Remote Desktop Protocol) and SSH (Secure Shell) connectivity to virtual machines in your Azure virtual network, directly through the Azure portal. It removes the need for public IP addresses on your VMs, reducing exposure to potential internet threats.
  • With Azure Bastion, you connect to your VMs over SSL using your web browser. There is no need to install any client software or configure inbound network security rules for RDP or SSH ports, which makes the process simple and safe for remote management.
  • Setting up Azure Bastion involves creating a dedicated subnet called ‘AzureBastionSubnet’ in your virtual network, deploying a Bastion host, and assigning a public IP to the Bastion host itself. The Azure Bastion host then acts as a secure gateway for administrators to access any VM in the network using their private IPs.
  • Azure Bastion supports remote work by allowing IT staff and developers to manage applications and VMs from anywhere with internet access, without the risks associated with exposing public endpoints or managing VPN infrastructure.
  • Pricing for Azure Bastion starts as soon as the resource is provisioned, so it is important to delete test Bastion hosts when you’re finished using them to avoid unnecessary charges.
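
A rough Azure CLI sketch of the deployment steps (names and the address prefix are placeholders; the 'az network bastion' commands may require an Azure CLI extension):

    # Bastion needs a dedicated subnet named exactly 'AzureBastionSubnet' (/26 or larger)
    az network vnet subnet create --resource-group rg-network --vnet-name vnet-app \
        --name AzureBastionSubnet --address-prefixes 10.0.255.0/26

    # Standard-SKU public IP for the Bastion host itself
    az network public-ip create --resource-group rg-network --name pip-bastion \
        --sku Standard --allocation-method Static

    # Deploy the Bastion host into the virtual network
    az network bastion create --resource-group rg-network --name bastion-app \
        --vnet-name vnet-app --public-ip-address pip-bastion --location westeurope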

Example: Imagine your IT administrator needs to troubleshoot a virtual machine (VM) running a business-critical web application. With Azure Bastion, the admin can log in to the Azure portal from their home office, securely connect to the VM using RDP or SSH, and perform maintenance—without needing to expose the VM to the internet or configure a VPN.

Use Case: A small tech company with remote employees uses Azure Bastion to enable its IT team to securely manage Windows and Linux VMs hosting company resources. Since team members work from different locations, Bastion ensures that administrative access is always available via the Azure portal, while keeping the VMs protected behind the network firewall without any public IP exposure.

For more information see these links:


Configure service endpoints for Azure platform as a service (PaaS)

  • Azure Virtual Network service endpoints allow you to connect securely to Azure platform as a service (PaaS) resources—such as Azure Storage, Azure SQL Database, and more—by restricting resource access to specific virtual network subnets. This helps prevent unwanted access from public internet sources.
  • When a service endpoint is enabled for a subnet, traffic between resources in that subnet and the chosen Azure service travels over Microsoft’s backbone, using the virtual network’s private IP addresses as the source. This makes it possible to configure fine-grained firewall rules that only allow access from that subnet.
  • Configuring a service endpoint is straightforward: you select your virtual network, choose the desired subnet, and enable the service endpoint for the Azure PaaS service you wish to secure. You can then adjust the PaaS resource’s access controls (such as firewall or network rules) to allow connections only from your subnet.
  • Service endpoints do not require new DNS entries or private IPs for PaaS resources; the resource’s public IP address is still used for DNS resolution. The key security benefit is enforced restrictions at the network level, reducing the risk of data exposure.
  • Comparing service endpoints with private endpoints: Service endpoints provide a secure route to PaaS resources for the entire subnet, while private endpoints (using Azure Private Link) provide a dedicated private IP in your virtual network that maps directly to a specific resource. Beginners typically start with service endpoints before exploring private endpoints for stricter isolation.
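
A brief Azure CLI sketch of securing a storage account to one subnet (subnet, VNet, and account names are placeholders):

    # Enable the Microsoft.Storage service endpoint on the subnet
    az network vnet subnet update --resource-group rg-network --vnet-name vnet-app \
        --name internal-resources --service-endpoints Microsoft.Storage

    # Deny public access by default, then allow only that subnet on the storage account
    az storage account update --resource-group rg-data --name contosofiles --default-action Deny
    az storage account network-rule add --resource-group rg-data --account-name contosofiles \
        --vnet-name vnet-app --subnet internal-resources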

Example: Imagine you work in IT at a small company that uses Azure Storage for confidential files. By enabling a service endpoint for Azure Storage on your ‘internal-resources’ subnet, you ensure that only virtual machines and services within that subnet can access your company’s storage account—blocking access from the public internet and all other subnets.

Use Case: A beginner Azure administrator deploys an Azure SQL Database to store sensitive customer data. To comply with security policies, they configure a service endpoint for the database on the ‘app-backend’ subnet of their virtual network. This restricts database access to only servers in that subnet and prevents unwanted internet or cross-network access, securing critical data.

For more information see these links:


Configure private endpoints for Azure PaaS

  • Private endpoints allow you to securely connect to Azure PaaS services (like Azure Storage or SQL Database) from within your own Azure virtual network, using a private IP address. This keeps your data traffic off the public internet and ensures better security.
  • With private endpoints, you can restrict access to Azure PaaS resources so only your virtual networks or on-premises networks (via VPN or ExpressRoute) can reach them. This helps safeguard sensitive information and reduces the attack surface.
  • Setting up a private endpoint is straightforward: you select the PaaS resource, create a private endpoint in your virtual network, and configure your resource to block public access. You can further secure access with Network Security Groups (NSGs) and other Azure network policies.
  • Private endpoints are mapped to specific resources (for example, a single storage account or database), not to entire Azure services, which prevents accidental access or data leaks between unrelated resources.
  • This approach simplifies network management. You don’t need to manage public IPs or complex routing. Azure handles the connectivity for you over its backbone, meaning your data securely stays inside Microsoft’s infrastructure.
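
An illustrative Azure CLI sketch of creating a private endpoint for a single storage account (names are placeholders; in practice you would also configure a private DNS zone such as privatelink.blob.core.windows.net for name resolution):

    # Look up the resource ID of the storage account to expose privately
    storage_id=$(az storage account show --resource-group rg-data --name contosodocs \
        --query id --output tsv)

    # Map the account's blob service to a private IP in the app-backend subnet
    az network private-endpoint create --resource-group rg-network --name pe-contosodocs \
        --vnet-name vnet-app --subnet app-backend \
        --private-connection-resource-id "$storage_id" \
        --group-id blob --connection-name pe-contosodocs-conn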

Example: Imagine your company hosts an intranet web app on Azure App Service. By configuring a private endpoint, your internal users can securely access the app over your private company network, without exposing the app to the public internet.

Use Case: A new IT professional is tasked with storing sensitive company documents in Azure Blob Storage. To comply with security requirements, they set up a private endpoint so only users connected through the company’s secure virtual network can access the storage account, preventing data leaks and unauthorized public access.

For more information see these links:


Configure name resolution and load balancing


Configure Azure DNS

  • Azure DNS allows you to host your domain’s DNS records in Azure, making it easy to manage name resolution for your applications and resources. Once your DNS zone is created, you can add, update, or remove records directly in the Azure portal.
  • To configure Azure DNS, you first create a DNS zone for your domain (e.g., contoso.xyz). Then, you add DNS records such as ‘A’ records (which map host names to IP addresses) or CNAME records (which alias one name to another), enabling users to reach services hosted on Azure.
  • For your DNS configuration to work across the internet, you need to update your domain registrar to point to Azure’s DNS name servers. This step ensures that when someone tries to reach your domain, their request is routed through Azure DNS, resolving the correct IP address for your services.
  • You can manage and troubleshoot DNS records using the Azure portal, Azure CLI, or PowerShell. Verification can be done using tools like nslookup or digwebinterface to confirm that your domain resolves correctly.
  • Setting the correct Time-to-Live (TTL) is important, as it determines how long DNS responses are cached by clients and DNS servers. Lower TTL values allow quicker propagation of changes but can increase DNS traffic.
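
A short Azure CLI sketch of the zone and record setup described above (domain, IP address, and resource group are placeholders):

    # Create the DNS zone and an A record for www
    az network dns zone create --resource-group rg-dns --name mycompany.com
    az network dns record-set a add-record --resource-group rg-dns --zone-name mycompany.com \
        --record-set-name www --ipv4-address 203.0.113.20

    # Show the Azure name servers to configure at your domain registrar
    az network dns zone show --resource-group rg-dns --name mycompany.com --query nameServers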

Example: Imagine you own a small business and have just purchased the domain ‘mycompany.com’. You decide to host your website using Azure Web Apps. In the Azure portal, you create a DNS zone for ‘mycompany.com’, then add an ‘A’ record to map ‘www.mycompany.com’ to your web app’s public IP address. Finally, you go to your domain registrar’s website and update the name server settings to use Azure’s DNS servers. Now visitors typing ‘www.mycompany.com’ in their browsers are correctly routed to your Azure-hosted website.

Use Case: An IT administrator at a company new to Azure needs to migrate their existing company website to Azure. They use Azure DNS to configure name resolution by creating a DNS zone for the company’s domain and setting up A records for web servers and CNAME records for service endpoints. This streamlines DNS management and ensures high availability and rapid updates for the company’s online services.

For more information see these links:


Configure an internal or public load balancer

  • Azure Load Balancer distributes incoming network traffic across multiple virtual machines (VMs), increasing reliability and availability for your applications. You can use two main types: internal and public load balancers.
  • A public load balancer routes external traffic from the internet to your Azure resources, such as web servers. You need to create a public IP address, choose IPv4 or IPv6, set allocation method (static/dynamic), and select an availability zone for resilience.
  • An internal load balancer distributes traffic only within your private Azure virtual network. When configuring, you select the target virtual network, subnet, choose a private IP (static/dynamic), and pick an availability zone for high availability.
  • Configuration is mostly done in the Azure portal by creating a load balancer resource, adding frontend IP configurations and backend pools, and defining load-balancing rules.
  • Load balancers are essential for scaling applications, ensuring seamless failover, and maintaining service uptime. Beginners should start with the Azure portal’s guided steps to set up basic load balancing.
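
A minimal Azure CLI sketch of a public Standard load balancer (names are placeholders; for an internal load balancer you would supply a virtual network and subnet instead of a public IP):

    # Frontend public IP and the load balancer itself
    az network public-ip create --resource-group rg-lb --name pip-lb \
        --sku Standard --allocation-method Static
    az network lb create --resource-group rg-lb --name lb-web --sku Standard \
        --public-ip-address pip-lb --frontend-ip-name fe-web --backend-pool-name be-web

    # Health probe plus a rule that spreads HTTP traffic across the backend pool
    az network lb probe create --resource-group rg-lb --lb-name lb-web --name probe-http \
        --protocol Tcp --port 80
    az network lb rule create --resource-group rg-lb --lb-name lb-web --name rule-http \
        --protocol Tcp --frontend-port 80 --backend-port 80 \
        --frontend-ip-name fe-web --backend-pool-name be-web --probe-name probe-http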

Example: Suppose your company is deploying a website using multiple Azure VMs. You set up a public load balancer so when users visit your site, traffic is automatically distributed across all the VMs. If one VM goes down, the load balancer directs visitors to the remaining healthy VMs, so your website stays online.

Use Case: An IT technician new to Azure needs to make an internal business app available to staff members only. They configure an internal load balancer in the Azure portal, linking it to their virtual network and backend VMs, so requests are routed securely and evenly among staff-facing app servers.

For more information see these links:


Troubleshoot load balancing

  • Check health probe configuration and status: Azure Load Balancer relies on health probes to determine which backend VMs are healthy and able to receive traffic. If health probes are misconfigured or blocked by firewalls or Network Security Groups (NSGs), traffic may not be distributed properly. Verify that health probes are set up for the correct ports and protocols and are allowed through any NSGs.
  • Diagnose backend pool connectivity issues: Backend VMs must be accessible for the load balancer to distribute traffic. If VMs are unreachable, check network connectivity, firewall settings on the VMs, and NSGs applied to the subnet or NIC. Use built-in Azure diagnostics tools, such as network traces and PsPing tests, to troubleshoot.
  • Resolve issues with load-balancing rule changes: When using virtual machine scale sets behind a load balancer, you cannot modify the backend port of an active load-balancing rule that has a health probe assigned. To change the port, remove the health probe temporarily, make the necessary port update, then reassign the health probe.
  • Monitor and respond to load balancer health states: Periodically review standard Azure metrics like Health Probe Status and Data Path Availability to ensure load balancer and backend resources are healthy. If a load balancer goes into a failed state, use Azure Resource Explorer to update its status and resolve provisioning issues.
  • Collect diagnostics and logs for advanced troubleshooting: If basic checks do not resolve the problem, collect network traces and logs from backend VMs and load balancer resources. These details are crucial when submitting support requests to Microsoft Azure.
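
As a sketch of the metric check in the fourth point, assuming a Standard load balancer whose health probe and data path metrics are exposed as DipAvailability and VipAvailability (resource names are placeholders):

    # Review health probe status for the load balancer over 5-minute intervals
    lb_id=$(az network lb show --resource-group rg-lb --name lb-web --query id --output tsv)
    az monitor metrics list --resource "$lb_id" --metric DipAvailability \
        --aggregation Average --interval PT5M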

Example: Imagine you set up an Azure Load Balancer to distribute web traffic to three virtual machines. After removing one VM for maintenance, you notice it still receives some network traffic. After investigation, you find that DNS and background storage operations are still trying to reach it. You verify connections using network trace tools and nslookup to confirm the source of the traffic.

Use Case: A new IT administrator in Azure deploys a load balancer to support a web application. Unexpectedly, some VMs stop responding to incoming requests. The administrator checks the health probe metrics and finds probes are failing. Using the troubleshooting steps—verifying probe ports, inspecting firewall and NSG settings, and using Azure diagnostics—the administrator resolves the issue and restores load-balanced traffic.

For more information see these links:


Monitor and maintain Azure resources (10–15%)


Monitor resources in Azure


Interpret metrics in Azure Monitor

  • Metrics in Azure Monitor are numerical data points automatically collected from various Azure resources, such as virtual machines, web apps, and storage accounts. They provide near real-time information about resource health and performance.
  • You can visualize metrics easily using charts and dashboards in Azure Portal. For example, plotting CPU usage of a virtual machine over time helps you quickly identify trends or sudden spikes that may require attention.
  • Metrics help you set up alerting rules: when a metric crosses a defined threshold (like high CPU or low free memory), Azure Monitor can notify you or trigger automated actions such as scaling up resources or running a script.
  • Metrics can be correlated with logs for deeper troubleshooting. By investigating anomalies in metrics, you can use the ‘Drill into Logs’ feature to identify root causes like failed requests or errors.
  • Azure Monitor metrics can be customized and filtered for different time ranges, resource instances, and aggregation methods (average, max, min), making them highly adaptable for monitoring specific scenarios.
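
For example, a quick Azure CLI sketch of pulling a VM's CPU metric (VM and resource group names are placeholders; 'Percentage CPU' is the platform metric for virtual machines):

    # Average CPU for the VM in 5-minute buckets
    vm_id=$(az vm show --resource-group rg-app --name vm-web --query id --output tsv)
    az monitor metrics list --resource "$vm_id" --metric "Percentage CPU" \
        --interval PT5M --aggregation Average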

Example: Suppose you manage a website hosted in Azure App Service. You monitor the ‘Http 5xx’ errors metric, which counts server-side failures. If you see a sudden spike on the chart in the Azure Portal, you can use the ‘Drill into Logs’ feature to review failed requests and troubleshoot the cause (like a code issue or external dependency failure).

Use Case: An IT administrator new to Azure wants to ensure their company’s online store is always available. They set up alerts in Azure Monitor for metrics like CPU percentage and Http errors on the web app. When the CPU spikes or errors increase, they automatically receive notifications to check for performance issues or outages, helping them maintain uptime and customer satisfaction.

For more information see these links:


Configure log settings in Azure Monitor

  • Azure Monitor allows you to configure log settings by setting up diagnostic settings for each Azure resource. You can specify what categories of data and metrics you want to collect (such as activity logs, error logs, or performance data).
  • You can choose where the logs are stored or sent, such as to a Log Analytics workspace for detailed queries, to Azure Storage for archiving, or to Event Hubs for integration with other systems. Each destination helps with different goals, like troubleshooting or compliance.
  • The configuration process is straightforward: in the Azure portal, select your resource, navigate to Monitoring > Diagnostic settings, and create or adjust settings as needed. You can choose which log categories to collect, the destinations, and manage up to five diagnostic settings per resource.
  • For some Azure services, such as API Management or Container Apps, you have specific log limits and options. For example, API Management limits log entry size, and Container Apps allow configuring logs at both the environment and app level, including choosing which system or application logs to send.
  • Actionable best practice: Always configure diagnostic settings for critical resources to ensure you can monitor, troubleshoot, and audit operations effectively. Saving logs to Log Analytics is often recommended for new users because it makes running queries and gaining insights easy.
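
An illustrative Azure CLI sketch of a diagnostic setting that sends data to Log Analytics (the resource and workspace IDs are placeholders, and the available log categories depend on the resource type):

    # Route a web app's HTTP logs and all metrics to a Log Analytics workspace
    az monitor diagnostic-settings create --name send-to-law \
        --resource "/subscriptions/<sub-id>/resourceGroups/rg-app/providers/Microsoft.Web/sites/contoso-web" \
        --workspace "/subscriptions/<sub-id>/resourceGroups/rg-monitor/providers/Microsoft.OperationalInsights/workspaces/law-central" \
        --logs '[{"category":"AppServiceHTTPLogs","enabled":true}]' \
        --metrics '[{"category":"AllMetrics","enabled":true}]'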

Example: Imagine you deploy a web application on Azure Container Apps. To monitor its health and performance, you go to Monitoring > Logging options in the Azure portal, select ‘Azure Monitor’ as the log destination, and set up diagnostic settings to send container console logs to a Log Analytics workspace. This way, you can use built-in queries to track errors or view traffic trends without extra setup.

Use Case: A new Azure admin wants to monitor a company’s API endpoints for failures and security issues. By configuring log settings in Azure Monitor for their API Management instance, they can collect gateway and application logs in Log Analytics, create alerts for failed requests, and quickly troubleshoot problems using intuitive portal queries.

For more information see these links:


Query and analyze logs in Azure Monitor

  • Azure Monitor automatically collects log data from your resources, such as virtual machines, Azure SQL databases, and applications, making it easy to centralize and view activity and diagnostic events.
  • You can query and analyze these logs using Log Analytics in the Azure portal, which provides an intuitive interface and ready-made queries for many resource types, so beginners do not need to learn Kusto Query Language (KQL) right away.
  • Running log queries helps you uncover trends, spot issues, and gain insights into your environment, such as identifying unusual spikes in resource usage or failed operations.
  • Results of log queries can be visualized as tables or charts and pinned to dashboards, shared with others, or used to set up automated alerts and actions when certain conditions in the log data are met.
  • As you get comfortable, you can start customizing and writing your own queries in KQL to answer more advanced questions and automate monitoring tasks.
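
When you are ready to script queries, one option is the Azure CLI (the workspace GUID is a placeholder, and the AppServiceHTTPLogs table and ScStatus column assume an App Service that streams HTTP logs to the workspace; older CLI versions may prompt to install an extension):

    # Count server errors per hour over the last week
    az monitor log-analytics query --workspace "<workspace-customer-id>" \
        --analytics-query "AppServiceHTTPLogs | where TimeGenerated > ago(7d) | where ScStatus >= 500 | summarize count() by bin(TimeGenerated, 1h)"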

Example: Imagine you manage several virtual machines in Azure. If users start reporting slower performance, you can use Log Analytics to run a simple, predefined query to check CPU usage across all VM logs for the past 24 hours, instantly seeing whether any VM is using more resources than expected.

Use Case: A new IT administrator monitors the company’s Azure web app to ensure it remains available and responsive. The admin uses Azure Monitor’s Log Analytics to query error logs for the last week, quickly identifying an increase in ‘Timeout’ errors after a recent deployment. This helps them pinpoint the cause and resolve the issue before users are affected further.

For more information see these links:


Set up alert rules, action groups, and alert processing rules in Azure Monitor

  • Alert rules in Azure Monitor let you define specific conditions (like high CPU usage or failed backups) that trigger alerts when met. This helps you stay informed about important activities or issues in your resources.
  • Action groups are collections of notification preferences and actions, such as sending emails to IT administrators, triggering webhooks, or running automation scripts. When an alert fires, the associated action group determines who is notified and what actions are taken.
  • Alert processing rules help you manage alerts at scale by automating and refining how notifications are delivered. For example, you can add or suppress action groups for multiple alerts at once, apply rules on schedules (like suppressing alerts during maintenance windows), and apply logic across entire subscriptions or resource groups.
  • You can set up and manage alert rules, action groups, and alert processing rules using the Azure Portal, Azure CLI, PowerShell, ARM templates, or REST API. This flexibility allows you to automate and script your alerting strategies according to your needs.
  • Alert processing rules provide an efficient solution for common scenarios, such as silencing alerts during planned maintenance or ensuring that critical alerts always notify the right people, without having to edit every individual alert rule.
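
A brief Azure CLI sketch of the alert rule and action group pairing (names, email address, and threshold are placeholders):

    # Action group that emails the IT support team
    az monitor action-group create --resource-group rg-monitor --name ag-itsupport \
        --short-name itsup --action email support support@contoso.com

    # Metric alert that fires when average CPU on a VM exceeds 80%
    vm_id=$(az vm show --resource-group rg-app --name vm-web --query id --output tsv)
    az monitor metrics alert create --resource-group rg-monitor --name alert-high-cpu \
        --scopes "$vm_id" --condition "avg Percentage CPU > 80" \
        --window-size 5m --evaluation-frequency 1m --action ag-itsupport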

Example: Imagine your team manages several virtual machines (VMs) on Azure. You set up an alert rule that triggers when CPU usage on any VM goes above 80%. You connect this alert to an action group that sends an email to the IT support team. To avoid unnecessary emails during scheduled maintenance, you create an alert processing rule that suppresses notifications from all VM alerts during the scheduled window.

Use Case: A new Azure administrator at an IT company needs to ensure all backup failures across the company’s multiple Azure subscriptions are promptly reported. They set up an alert processing rule to automatically add a central action group (which sends SMS and email alerts to the backup team) to all backup-related alerts, regardless of how or where the alert was generated, ensuring critical failures are never missed.

For more information see these links:


Configure and interpret monitoring of virtual machines, storage accounts, and networks by using Azure Monitor Insights

  • Azure Monitor Insights provides dedicated dashboards for monitoring different resource types—such as virtual machines, storage accounts, and networks—giving you an easy, visual way to track health, performance, and usage.
  • For virtual machines, VM Insights collects and displays data on CPU, memory, disk usage, running processes, and dependencies, helping you quickly spot issues and analyze trends over time.
  • Storage Insights aggregates metrics like capacity, transaction rates, and latency for storage accounts, so you can monitor performance and availability, and set alerts for critical thresholds.
  • Network Insights shows real-time topologies and health of Azure networking components, including traffic analytics, NSG flow logs, and connection monitoring, which helps you troubleshoot connectivity issues.
  • It’s simple to set up monitoring and alerts using Windows Admin Center or the Azure portal—once enabled, you can create email alerts for specified conditions (like high CPU usage or storage nearing capacity), keeping you proactively informed.

Example: Suppose you manage a small web application hosted on an Azure virtual machine. By enabling VM Insights, you get instant access to charts showing the VM’s CPU and memory usage. If the website starts running slowly, you can check these charts to see if there’s a spike in resource usage and set up alerts to be notified the next time performance drops.

Use Case: A new IT admin at a company uses Azure Monitor Insights to onboard all their virtual machines and storage accounts. With consolidated dashboards and automated email alerts, they quickly spot and resolve performance issues, like a VM running out of memory or a storage account approaching its capacity limit, ensuring smooth operation without deep Azure expertise.

For more information see these links:


Use Azure Network Watcher and Connection Monitor

  • Azure Network Watcher provides monitoring and diagnostic tools to help you understand and troubleshoot your Azure network resources. It helps you identify connectivity problems, track traffic, and view network topology in real time.
  • Connection Monitor, a feature within Network Watcher, continuously checks the connection between endpoints (such as virtual machines, application gateways, or on-premises servers). It can detect common issues like DNS resolution failures, unreachable destinations, blocked ports, or certificate problems.
  • You can use Connection Monitor to automatically alert you when there are network issues, helping you resolve problems quicker. It can display network diagrams, highlight where failures occur, and suggest troubleshooting steps, making networking problems easier to understand and act upon.
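
For a one-off check, a small Azure CLI sketch using Network Watcher's connectivity test (VM, resource group, and destination are placeholders; the source VM needs the Network Watcher agent extension installed):

    # Test connectivity from a VM to an external endpoint on port 443
    az network watcher test-connectivity --resource-group rg-app --source-resource vm-web \
        --dest-address www.contoso.com --dest-port 443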

Example: Imagine you work in an IT support role and users are reporting that an application hosted in Azure is slow or periodically unreachable. By using Network Watcher and Connection Monitor, you can visualize the network path, test the connectivity between relevant virtual machines or endpoints, and quickly determine if the issue is caused by a DNS problem, a firewall rule, or a failing route.

Use Case: A new IT administrator is tasked with ensuring a hybrid cloud setup (with resources both in Azure and on-premises) remains connected. They set up Connection Monitor to continuously check connections between their on-premises server and Azure-hosted virtual machine. When a VPN connection drops or a certificate expires, the Connection Monitor immediately detects the issue and sends an alert, enabling the administrator to fix the problem before users are affected.

For more information see these links:


Implement backup and recovery


Create a Recovery Services vault

  • A Recovery Services vault is a storage entity in Azure used to manage and store backup data and recovery points for your IT resources, such as virtual machines, servers, and workloads.
  • Creating a vault involves specifying key details: the subscription where costs are tracked, the resource group for organizational purposes, a unique and recognizable vault name, and the Azure region where you want your backups to reside for compliance and performance reasons.
  • You do not need to manually set up storage accounts—the Recovery Services vault automatically handles backup storage, making backup management simple and secure.
  • For resources spread across multiple regions, you must create a separate Recovery Services vault in each region to ensure backups are properly protected and compliant with data residency requirements.
  • Immutable vaults offer an additional layer of security by preventing deletion of backup data until its scheduled expiry, protecting against threats like accidental deletion or ransomware.
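
Creating the vault itself is a single command in the Azure CLI (names and region are placeholders):

    # Create a Recovery Services vault in the region of the resources it will protect
    az backup vault create --resource-group rg-backup --name CompanyVMVault --location westeurope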

Example: An IT administrator at a small company uses the Azure portal to create a Recovery Services vault called ‘CompanyVMVault’ in the West Europe region. They add the vault to their dashboard and configure backup policies for their virtual machines, ensuring daily backups are securely stored and can be restored in case of a system failure.

Use Case: A new-to-Azure IT professional wants to protect their virtual machines running critical business applications. By creating a Recovery Services vault in Azure and assigning backup policies, they ensure that in case their on-premises system fails—due to hardware issues or cyberattacks—they can restore their VM data quickly and minimize business downtime.

For more information see these links:


Create an Azure Backup vault

  • An Azure Backup vault is a secure, centralized storage entity in Azure where backup data and recovery points are kept. It streamlines backup management by providing a single location to monitor, configure, and restore backups.
  • Creating a Backup vault is simple: in the Azure portal, search for ‘Backup vaults,’ choose ‘Add,’ and fill in the basics like subscription, resource group, location (region), and storage redundancy options (Geo-redundant or Locally redundant).
  • Storage redundancy is an important choice: Geo-redundant ensures data is copied to a secondary region for disaster protection, while Locally redundant keeps data within one region at lower cost. Choose based on your backup priority and cost preference.
  • Once created, you can use the Backup vault to protect various Azure services like VMs, databases, managed disks, or even Azure Blobs. The vault manages backup policies, scheduling, and makes restoring data straightforward.
  • You can also automate Backup vault creation using Azure CLI, ARM templates, or REST API for larger or repeatable deployments, which is helpful if you manage many resources or work in larger organizations.

Example: Imagine you’re an IT administrator at a company moving to the cloud. You want to ensure your Azure virtual machines are backed up against accidental deletion or corruption. You use the Azure portal to create a Backup vault named ‘CompanyBackups’ in the East US region, choose geo-redundant storage for maximum protection, and then assign your VMs to be protected by this vault. Now, your backups are automatically managed and easily restorable if needed.

Use Case: A small IT company new to Azure needs to protect critical Azure SQL databases and virtual machines used for their customer support system. By creating a single Azure Backup vault, they can centralize the management of all their backups, ensure data recovery in case of accidental loss or attacks, and meet company policies without complex setup.

For more information see these links:


Create and configure a backup policy

  • A backup policy is a set of rules that determine when and how often your data is backed up, and for how long backups are kept. In Azure, policies can be customized for different data types (like files, blobs, or databases) to match your organization’s needs.
  • Creating a backup policy typically involves selecting the resources to be protected (such as specific volumes, storage accounts, or user accounts), setting the backup frequency (daily, weekly, monthly), and specifying the retention duration for each backup type.
  • You can attach a single backup policy to multiple resources, making it easier to manage backups across your environment. Policies can be edited or temporarily suspended, giving you flexibility depending on business changes.
  • Backup policies automate regular backups, reducing manual effort and helping ensure data is always recoverable. Manual backups can still be performed for special needs, such as before a planned system change.
  • When configuring a backup policy in Azure (for services like Azure NetApp Files, Azure Blobs, or Microsoft 365), always review service-specific resource limits and minimum requirements—for example, keeping at least 2 daily backups for NetApp Files.
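
As an illustrative sketch only, an Azure NetApp Files backup policy along these lines can be created with the Azure CLI; the exact command and parameter names below are assumptions that may vary between CLI versions, so check the current reference before relying on them:

    # Backup policy with daily, weekly, and monthly retention (placeholder names)
    az netappfiles account backup-policy create --resource-group rg-anf --account-name anf-account \
        --backup-policy-name policy-projects --location westeurope \
        --daily-backups 15 --weekly-backups 6 --monthly-backups 4 --enabled true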

Example: A small IT company uses Azure NetApp Files to store project files. To protect these files, they set up a backup policy that automatically creates daily backups, keeps the latest 15 daily backups, and also keeps weekly and monthly backup copies. This means that if any file is deleted or corrupted, they can easily restore it from any of the recent backups.

Use Case: A beginner Azure administrator is assigned to ensure compliance with the company’s data protection policy. They log into the Azure portal, navigate to Azure NetApp Files, and configure a backup policy for the project’s main file volume. They set daily backups to retain 15 copies, weekly backups to retain 6 copies, and monthly backups to keep 4. This protects the company’s files against accidental deletions or system failures, and meets regulatory requirements for data retention.

For more information see these links:


Perform backup and restore operations by using Azure Backup

  • Azure Backup is a cloud-native service that allows you to regularly create backups of your Azure resources, such as Virtual Machines, managed disks, and storage accounts (Blobs). These backups ensure that your data can be restored in case of accidental deletion, corruption, or disaster.
  • You can perform backup and restore operations using the Azure portal, Azure PowerShell, Azure CLI, or REST API. The Azure portal provides a simple point-and-click interface, while other methods offer automation and scripting capabilities.
  • Azure Backup offers two main types of backup for Blobs: operational backups (restore data in the same storage account to any point-in-time within the retention range) and vaulted backups (restore data to a different storage account using recovery points). For managed disks, backups use incremental snapshots that allow you to restore disks to a previous state quickly and cost-effectively.
  • To restore data, you select the appropriate backup instance and recovery point in the Backup center, then choose where to restore the data. Always check for any prerequisites—like breaking active leases on blobs or ensuring correct role assignments for vaulted backups—to avoid restore failures.
  • Centralized management is possible with the Azure Backup center, which lets you monitor all backup and restore operations, view compliance, set governance policies, and ensure that backups are running and recoverable across your organization.
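
A rough Azure CLI sketch of protecting a VM and triggering an on-demand backup (vault, VM, and policy names are placeholders; for an Azure VM the container and item names normally match the VM name when --backup-management-type AzureIaasVM is specified):

    # Enable backup for an existing VM using a policy defined in the vault
    az backup protection enable-for-vm --resource-group rg-backup --vault-name CompanyVMVault \
        --vm vm-app01 --policy-name DefaultPolicy

    # Trigger an on-demand backup and keep the recovery point until the given date
    az backup protection backup-now --resource-group rg-backup --vault-name CompanyVMVault \
        --container-name vm-app01 --item-name vm-app01 \
        --backup-management-type AzureIaasVM --retain-until 01-01-2026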

Example: An IT administrator uses Azure Backup to schedule daily backups of a company’s virtual machines running business-critical applications. After a system update accidentally corrupts the VM, the administrator accesses Backup center in the Azure portal and restores the VM to its state from the previous day’s backup, minimizing downtime and loss of data.

Use Case: A small business new to using Azure stores invoices and client files in Azure Blob Storage. By configuring operational backups, the business can quickly restore files if someone accidentally deletes or overwrites them, ensuring business continuity without needing complex backup infrastructure.

For more information see these links:


Configure Azure Site Recovery for Azure resources

  • Azure Site Recovery provides automated replication, failover, and failback for Azure resources (such as virtual machines), helping maintain business continuity in case of outages.
  • To configure Site Recovery, you typically create a Recovery Services vault, identify the resources (like VMs) you want to protect, and enable replication to a target Azure region, storage account, and network.
  • The process includes mapping your source environment (networks, resource groups) to corresponding target resources in a different Azure region to minimize downtime during disaster recovery.
  • You should test your recovery plan through planned drills and failovers, ensuring that all replicated resources (such as VMs) are accessible and operational in the target region.
  • Ongoing management involves monitoring replication health and regularly updating or testing recovery plans to keep disaster recovery strategies effective and aligned with your organizational needs.

Example: A company running several Azure VMs that host critical applications sets up Azure Site Recovery to replicate these VMs to another Azure region. If the primary region experiences an outage, the company can quickly fail over to the secondary region and keep their applications running with minimal disruption.

Use Case: An IT admin new to Azure configures Azure Site Recovery for a small business’s Azure VMs hosting a remote desktop service (RDS) deployment. By following the step-by-step checklist—preparing VMs, setting up a Recovery Services vault, mapping networks, and testing the recovery plan—they ensure users can still access their desktops even if their primary Azure region suffers an unexpected outage.

For more information see these links:


Perform a failover to a secondary region by using Site Recovery

  • Failover to a secondary region using Azure Site Recovery means temporarily switching your virtual machines (VMs) and workloads from the main Azure region to another region when an outage or disaster happens.
  • Before initiating a failover, it’s crucial to ensure that replication is enabled and up-to-date between the primary and secondary regions. This process continuously copies VM data to the secondary site.
  • The failover process is user-driven and can be triggered from the Azure Portal. Once triggered, VMs are spun up in the secondary region using the latest replicated data, ensuring minimal downtime.
  • After performing the failover, you can quickly access your applications and data from the secondary region, allowing for continued business operations while the primary region is restored.
  • Once the primary region is healthy again, you can reprotect your VMs and perform a ‘failback’ to move operations back to the original region.

Example: Imagine you host a web application on Azure VMs in the East US region. One day, due to a major power outage, the East US region is temporarily unavailable. With Site Recovery enabled, you start a failover to the West US region through the Azure Portal. Your web app is soon back online in the West US region with all its latest data.

Use Case: A small IT team new to Azure manages a company’s payroll system, ensuring round-the-clock availability. They configure Azure Site Recovery to replicate critical VMs from the primary region to a secondary region. When a regional outage occurs, they use Site Recovery to manually fail over, maintaining access to payroll services with minimal disruption.

For more information see these links:


Configure and interpret reports and alerts for backups

  • Azure Backup offers built-in monitoring and reporting tools, such as Backup Reports and Azure Monitor, that help you track the status of your backup jobs, usage, and trends over time. These reports are useful for auditing and identifying areas to optimize your backups.
  • You can configure alerts in Azure Monitor to notify you (through email, SMS, or other channels) when backup jobs fail, complete, or encounter issues. You can customize which alerts you want to receive and set thresholds based on backup metrics.
  • It’s possible to manage how and when alerts are sent, including suppressing notifications during planned maintenance windows or only notifying about specific backup resources. This helps prevent unnecessary alerts and ensures you only get notified when needed.
  • Action Groups in Azure allow you to send backup alerts to different notification channels, such as email, ITSM, or even automation tools like Logic Apps. You can group notifications by importance or audience, making your backup monitoring more efficient.
  • Reports and alerts can be viewed and managed across multiple subscriptions, vaults, or even tenants (with Azure Lighthouse), providing centralized visibility and control, especially for organizations managing several Azure environments.

Example: Imagine you are running backups for several virtual machines in Azure. You set up an alert to notify you via email whenever any backup job fails. One morning, you receive an alert that the backup of a critical server did not complete successfully. You can quickly check the Azure portal, review the Backup Reports for details, and start troubleshooting the problem without delay—helping prevent data loss.

Use Case: A new Azure administrator for a small IT business configures backup reports and sets up alerts so that they are immediately notified if any scheduled backup does not complete successfully. By using the Azure portal, the administrator can monitor all backups, view historical success rates, and automatically receive emails for any issues, ensuring reliable data protection as the business grows.

For more information see these links: