Configuring and securing connections to Azure SQL Database

With Azure SQL Database, Azure configures the network for you and by default restricts all access using Azure SQL Database firewall rules, as discussed in Chapter 3, “Getting started with an Azure SQL Database.” If a firewall rule does not exist, Azure will reject all connection attempts from IP addresses that have not been whitelisted explicitly.

• Use database-level firewall rules in conjunction with contained users (discussed below) whenever possible to make your database more portable.

• Use server-level firewall rules when you have many databases that have the same access requirements and you don’t want to spend time configuring each database individually.
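As a sketch of the difference, both kinds of rules can be created in T-SQL (the rule names and IP addresses below are placeholders; server-level rules can also be managed through the Azure portal):

```sql
-- Server-level rule: run in the master database of the logical server.
EXECUTE sp_set_firewall_rule
    @name = N'CorpOfficeRange',
    @start_ip_address = '203.0.113.1',
    @end_ip_address   = '203.0.113.254';

-- Database-level rule: run in the user database itself, so the rule
-- travels with the database when it is exported and imported.
EXECUTE sp_set_database_firewall_rule
    @name = N'AppServer',
    @start_ip_address = '203.0.113.10',
    @end_ip_address   = '203.0.113.10';
```

Because sp_set_database_firewall_rule stores the rule inside the user database, database-level rules are the more portable choice.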

Additionally, Azure SQL Database requires encrypted connections at all times while data is “in transit” to and from the database. In your application’s connection string, you must specify parameters to encrypt the connection and not to trust the server certificate (this is done for you if you copy your connection string out of the Azure portal). If you do not, the connection will not verify the identity of the server and will be susceptible to “man-in-the-middle” attacks. For the ADO.NET driver, for instance, these connection string parameters are Encrypt=True and TrustServerCertificate=False. For more information, see Azure SQL Database Connection Encryption and Certificate Validation. The code block below shows a sample ADO.NET connection string.

Server=tcp:[your_sql_database_server_name_here].database.windows.net,1433;Database=[your_sql_database_name_here];User ID=[your_username_here]@[your_sql_database_server_name_here];Password={your_password_here};Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;

Unlike SQL Server in an Azure virtual machine, with Azure SQL Database you do not have to secure the operating system itself. However, all Azure subscription administrators have access to the SQL Database instance, so you should limit the number of subscription administrators.

Source of Information : Migrating SQL Server Databases to Azure

Configuring and securing connections to SQL Server in an Azure virtual machine

With an Azure virtual machine, you have several options to restrict and secure connections to your SQL Server instance. The virtual network for your Azure virtual machine is a logical isolation of the Azure cloud dedicated to your subscription. You can fully control the IP address blocks, DNS settings, security policies, and route tables within this network, similarly to how you use these mechanisms to control your on-premises network. You can also segment your virtual network into subnets to further control access to the virtual machines on your virtual network that host your SQL Server instances.

In addition, you can connect the virtual network to your on-premises network using one of the connectivity options available for Azure virtual machines, in essence expanding your on-premises network into Azure and gaining the enterprise scale that Azure provides. If you do so, you can join your virtual machine to your on-premises Active Directory environment and use Windows user accounts for authentication. Finally, you can set up and configure an Azure virtual machine as a domain controller and join your SQL Server virtual machine to this Azure domain controller. This Azure domain controller can be federated with Azure Active Directory, be federated with your on-premises Active Directory, or be a controller within your existing on-premises Active Directory. A full discussion of your options and best practices for configuring a secure connection to your on-premises network is beyond the scope of this ebook.

In addition to configuring and securing the virtual network to which your Azure virtual machine is connected, you should take these security steps:
• Use a unique local administrator account for your virtual machine that is not named Administrator.

• Use complex strong passwords for all of your accounts, Windows and SQL Server. For more information about how to create a strong password, see the Create Strong Passwords article in the Safety and Security Center.

• Enable encrypted connections for your SQL Server instance and configure your SQL Server instance with a signed certificate.

• Use Windows firewall rules to control database engine access.

• If your virtual machines should be accessed only from a specific network, use network security groups (NSGs) to control traffic and restrict access to certain IP addresses or network subnets. An NSG contains access control rules that allow or deny traffic based on traffic direction, protocol, source address and port, and destination address and port.

Source of Information : Migrating SQL Server Databases to Azure

Migrating a compatible SQL Server database to SQL Database

To migrate a compatible SQL Server database to Azure SQL Database, Microsoft provides several migration methods for various scenarios. The method you choose depends upon your tolerance for downtime, the size of your SQL Server database, and the speed and quality of your connection to the Microsoft Azure cloud.

If you can afford some downtime or you are performing a test migration of a production database for later migration, consider one of the following three methods:

• SSMS Migration Wizard: For small to medium databases, migrating a compatible SQL Server 2005 or later database is as simple as running the Deploy Database to Microsoft Azure SQL Database Wizard in SQL Server Management Studio.

• Export to BACPAC file and then Import from BACPAC file: If you have connectivity challenges (no connectivity, low bandwidth, or timeout issues) and for medium to large databases, use this manual method. With this method, you export the SQL Server schema and data to a BACPAC file and then import the BACPAC file into SQL Database using either the Deploy Database to Microsoft Azure SQL Database Wizard in SQL Server Management Studio or the SqlPackage command-prompt utility.

• Use BACPAC and BCP together: Use a BACPAC file and BCP for very large databases to achieve greater parallelization for increased performance, albeit with greater complexity. With this method, migrate the schema and the data separately.

  • Export the schema only to a BACPAC file.
  • Import the schema only from the BACPAC file into SQL Database.
  • Use BCP to extract the data into flat files and then parallel load these files into Azure SQL Database.
Source of Information : Migrating SQL Server Databases to Azure

Determining and resolving Azure SQL Database V12 compatibility issues

To export to a BACPAC file, the database must be compatible with Azure SQL Database V12; the export will fail if it is not. With SQL Database V12, there are very few remaining compatibility issues other than server-level and cross-database operations. Databases and applications that rely on partially supported or unsupported functions will need some reengineering to fix these incompatibilities before the SQL Server database can be migrated. Review these partially supported or unsupported functions before migrating your SQL Server database.

The tooling to detect and fix these incompatibilities is improving, but it still requires some manual intervention in many cases. The list below describes the tooling as it currently exists, including new tooling coming out in preview mode in May 2016.

• The Microsoft SQL Server 2016 Upgrade Advisor preview (UA): This stand-alone tool, currently in preview, detects and generates a report of SQL Database V12 incompatibilities. It does not yet have all of the most recent compatibility rules (but will soon). If no errors are detected, you can continue and complete the migration to SQL Database. If errors are detected, you must use another tool to fix them; SSDT is the recommended tool.
As mentioned previously, UA will be updated in late April or early May 2016 and become the Data Migration Assistant (DMA). DMA will contain the most recent compatibility rules and will, over time, enable you to fix incompatibilities directly in the tool. This functionality is still coming online, so stay tuned.

• SQL Server Data Tools for Visual Studio (SSDT): For Azure SQL Database V12, the best tool today is SSDT because it uses the most recent compatibility rules to detect SQL Database V12 incompatibilities. To use this option, download the newest version of SSDT. If incompatibilities are detected, you can fix them directly in SSDT.

• SqlPackage: SqlPackage is a command-prompt utility that tests for compatibility issues and, if any are found, generates a report of them. If you use this tool, make sure you use the most recent version (follow the link provided) so that you get the most recent compatibility rules. If errors are detected, you must use another tool to fix them; SSDT is the recommended tool.

• The Export Data-tier Application Wizard in SQL Server Management Studio: This wizard detects compatibility errors and reports them to the screen. If no errors are detected, you can continue and complete the migration to SQL Database. If errors are detected, you must use another tool to fix them; SSDT is the recommended tool.

• SQL Azure Migration Wizard (SAMW): For Azure SQL Database V11, the best tool was SAMW. This community-supported CodePlex tool has not been updated fully for Azure SQL Database V12 and generally is not the best method to use to migrate to Azure SQL Database V12.

Source of Information : Migrating SQL Server Databases to Azure

Migrating using full database and transaction log backups to minimize downtime

To minimize the downtime involved in a migration, you can use a combination of a full database backup, a differential backup (manual method only), and multiple transaction log backups. This method commonly is used with medium to large databases where minimizing downtime is of paramount importance. To test your database in Azure before migrating a production system, either use the full database backup method or use this method to test and verify the migration steps before the actual production migration.

 Manual method: Back up, copy, and then restore (without recovery) a full database backup, a differential backup (optional), and multiple transaction logs while keeping your production system operational. Continue taking and applying transaction log backups without recovery until you are ready to switch over. Apply the final transaction log backup with recovery when you are ready to switch your clients/applications over to connect to the SQL Server instance in the Azure virtual machine rather than the on-premises database.

 AlwaysOn Availability Group method: If you have an on-premises AlwaysOn Availability Group, you can extend the availability group to SQL Server in your virtual machine. Configure your SQL Server instance in the Azure virtual machine as an AlwaysOn replica, seed the replica with a full database backup, keep the replica current with transaction log backups that are applied automatically, and then fail over to the Azure replica when you are ready to switch over your clients to the SQL Server instance in the Azure virtual machine (and turn off AlwaysOn in the source database).
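As an illustrative sketch of the manual method described above, the restore sequence on the destination instance looks like this (the database name and file paths are placeholders):

```sql
-- On the destination SQL Server instance in the Azure VM.

-- 1. Restore the full backup, leaving the database non-operational
--    so that further backups can still be applied:
RESTORE DATABASE AdventureWorks
    FROM DISK = N'D:\Backups\AdventureWorks_full.bak'
    WITH NORECOVERY;

-- 2. Optionally restore a differential backup the same way:
RESTORE DATABASE AdventureWorks
    FROM DISK = N'D:\Backups\AdventureWorks_diff.bak'
    WITH NORECOVERY;

-- 3. Apply transaction log backups as they are taken on the source:
RESTORE LOG AdventureWorks
    FROM DISK = N'D:\Backups\AdventureWorks_log1.trn'
    WITH NORECOVERY;

-- 4. At switchover time, apply the final log backup WITH RECOVERY
--    to bring the database online:
RESTORE LOG AdventureWorks
    FROM DISK = N'D:\Backups\AdventureWorks_tail.trn'
    WITH RECOVERY;
```

Only the final restore uses WITH RECOVERY; every earlier restore must use WITH NORECOVERY so that subsequent backups can still be applied.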

Source of Information : Migrating SQL Server Databases to Azure


Migrating using full database backup for simplicity

To use the full database backup method with SQL Server, use one of the following mechanisms:

• Manually back up your source database and then restore it to your SQL Server instance in your virtual machine using SQL Server Management Studio. You can back it up and restore it using one of these methods:

  • Back up to a local file, copy backup to your virtual machine (or to blob storage), and then restore your backup file into your SQL Server instance in your virtual machine. We will walk through this method at the end of this chapter.
  • Backup to URL and then restore from URL. This method is supported only for SQL Server 2012 SP1 CU2 or newer.


• Use the Deploy Database to a Microsoft Azure VM Wizard in SQL Server 2016 SQL Server Management Studio to back up your source database and then restore it to your SQL Server instance in your virtual machine. This method currently is not recommended because this wizard has not yet been updated to support Azure Resource Manager (ARM)–deployed virtual machines.

• Use the Microsoft SQL Server 2016 Upgrade Advisor to back up your source database and then restore it to your SQL Server instance in your virtual machine. As mentioned previously, the SQL Server 2016 Upgrade Advisor currently is in preview, and a new version called Data Migration Assistant (DMA) will be released in late April or early May 2016. As this tool matures, it will become the preferred method. Until Data Migration Assistant is available, this method is not recommended.

You also could detach your data and log files, copy those files into your virtual machine, and then attach those files. However, this method has no advantage over the database backup and restore method.
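The backup-to-URL mechanism mentioned above can be sketched in T-SQL as follows (the storage account, container, and key are placeholders; this access-key credential style is the form supported on SQL Server 2012 SP1 CU2 and newer):

```sql
-- Create a credential holding the storage account access key:
CREATE CREDENTIAL BackupCredential
    WITH IDENTITY = 'mystorageaccount',
    SECRET = '<storage_account_access_key>';

-- Back up directly to Azure Blob storage from the source instance:
BACKUP DATABASE AdventureWorks
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks.bak'
    WITH CREDENTIAL = 'BackupCredential', COMPRESSION;

-- Restore from the same URL on the SQL Server instance in the Azure VM:
RESTORE DATABASE AdventureWorks
    FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks.bak'
    WITH CREDENTIAL = 'BackupCredential';
```

This avoids the separate copy step of the local-file method, since the backup lands in Azure storage directly.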

Source of Information : Migrating SQL Server Databases to Azure

Migrating a SQL Server user database to a SQL Server instance in an Azure virtual machine

When migrating a SQL Server user database to a SQL Server instance in an Azure virtual machine, the underlying mechanism generally uses the SQL Server backup functionality (using transactional replication also is an option, and this option is discussed in its own section later in this chapter). The options that use a variation of the SQL Server backup technology are discussed in this section.

Using database backups, you can:
• Use a full database backup. For simplicity, you can perform the user database migration using a SQL Server full database backup. This method is most appropriate for small databases where the downtime is short; for scenarios in which you can afford the amount of downtime required to back up, copy, and restore the database to the SQL Server instance in the Azure virtual machine; and for scenarios in which you just want to test SQL Server in an Azure virtual machine prior to an actual migration.

• Use a combination of full database and transaction log backups to minimize downtime on your production system.

• Use AlwaysOn Availability Groups and create a replica of your existing on-premises user database on your Azure virtual machine (or replicas of a group of user databases that must be migrated together).

To migrate a SQL Server 2000 database to a supported configuration in an Azure virtual machine, you must first upgrade the database to SQL Server 2005 SP4 or greater. When migrating using these methods, encrypt your backups and use encrypted connections to ensure your data is secure during the migration process.
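As a sketch of taking an encrypted backup for the migration (object names, passwords, and paths are placeholders; native backup encryption assumes a source instance of SQL Server 2014 or newer):

```sql
-- One-time setup on the source instance: a database master key and a
-- certificate to protect the backup.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong_password_here>';
CREATE CERTIFICATE BackupCert WITH SUBJECT = 'Migration backup certificate';

-- Take an encrypted full backup of the user database:
BACKUP DATABASE AdventureWorks
    TO DISK = N'D:\Backups\AdventureWorks_full.bak'
    WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert),
    COMPRESSION;
```

Back up the certificate and its private key as well; the destination instance needs them to restore the encrypted backup.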

Source of Information : Migrating SQL Server Databases to Azure

Migrating and managing metadata stored outside a user database

When you migrate a user database to an Azure environment, metadata that is stored outside the user database generally is not migrated as part of the database migration. In a typical SQL Server environment, an application has dependencies on the master and msdb databases (and sometimes multiple user databases) and on metadata in the user database itself. Anything stored outside a user database that is required for the correct functioning of that database must be made available on the destination server instance, or the application must be re-architected to find that metadata in the user database. For example, the logins for an application traditionally are stored as metadata in the master database, and they must be either re-created on the destination server or re-created as contained users. For issues with orphaned logins, see Troubleshoot Orphaned Users.
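The two approaches to handling logins can be sketched in T-SQL (names and passwords are placeholders; contained users require the destination to support containment, which Azure SQL Database does by default):

```sql
-- Option 1: re-create the login at the destination server level
-- (stored in master), then map the database user to it:
CREATE LOGIN AppLogin WITH PASSWORD = '<strong_password_here>';
GO
USE AdventureWorks;
CREATE USER AppUser FOR LOGIN AppLogin;
GO

-- Option 2: create a contained database user, which carries its own
-- password inside the user database and needs no server-level login:
USE AdventureWorks;
CREATE USER ContainedAppUser WITH PASSWORD = '<strong_password_here>';
```

Option 2 keeps the authentication metadata inside the user database, which is why contained users make the database more portable.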

• Migrating to SQL Server in an Azure VM: If you are migrating a SQL Server database to SQL Server in an Azure virtual machine (Azure VM), you can either migrate all of the user databases using backup and restore or re-create the needed metadata. For more information, see Manage Metadata When Making a Database Available on Another Server Instance (SQL Server). Although not typically done, you could also migrate all of the system databases.

• Migrating to SQL Database: If you are migrating a SQL Server database to SQL Database, you will need to re-architect how your database and application are designed to work within the constraints of SQL Database. For example, SQL Server Agent jobs are not supported in SQL Database. If an application or database maintenance plan depends on SQL Server Agent jobs, you must use other Azure functionality to achieve these tasks, such as elastic database jobs, Azure scheduler, or Azure automation. Although elastic database jobs are still in preview, they generally should be your first option to replace SQL Server Agent jobs.

Source of Information : Migrating SQL Server Databases to Azure

Managing breaking changes between SQL Server versions

When you migrate a database to a newer version of SQL Server or to SQL Database, you may encounter breaking changes from the version of SQL Server on which the database was running. These changes might break applications, scripts, or functionalities that are based on earlier versions of SQL Server. There are two types of breaking changes: changes that will be detected by the migration tool and prevent the migration and changes that are not detected but affect functionality. With respect to changes that affect functionality, running a migrated database at an older compatibility level is a short-term fix, but you will need to run your database at the highest compatibility level possible to take advantage of the newest capabilities of SQL Server or SQL Database.

Before you begin a migration, you should review the Microsoft documentation with respect to any breaking changes if you are migrating from an older version of SQL Server to a newer version of SQL Server. See the following links for details regarding breaking changes:

• SQL Server 2016 Breaking Changes
• SQL Server 2014 Breaking Changes
• SQL Server 2012 Breaking Changes
• SQL Server 2008 R2 Breaking Changes
• SQL Server 2008 Breaking Changes
• SQL Server 2005 Breaking Changes

Microsoft provides an upgrade advisor to help you detect and prepare for a version upgrade. These advisors will examine scripts, stored procedures, triggers, and trace files, but they will not analyze desktop applications or encrypted stored procedures. See the following links for information about how to install and use the upgrade advisor for the version of SQL Server to which you are upgrading and for information about where to find or install the upgrade advisor.

• SQL Server 2016
• SQL Server 2014
• SQL Server 2012
• SQL Server 2008 R2
• SQL Server 2008
• SQL Server 2005

When you fix the issues caused by breaking changes depends on your source and destination environments.

• If you are migrating from SQL Server on-premises to SQL Server in a virtual machine, you can fix them in the source database (perhaps affecting the production system), in an on-premises copy as part of the migration, or after the database is restored. A SQL Server database backup can be restored to a newer version of SQL Server, even with issues that must be changed before the database will function properly (discontinued syntax, for example).

• If you are migrating from a non–SQL Server source database or are migrating to SQL Database, you must fix the issues detected before the migration because the migration mechanisms do not permit you to fix them after the migration.

To fix issues detected, you can use either SQL Server Management Studio or SQL Server Data Tools for Visual Studio (SSDT). With SSDT, you create a database project from the database schema and then modify the schema within the project without affecting the source database. You then can merge the changes back into the source database or to a database copy and then synchronize the data.

The SQL Server 2016 Upgrade Advisor is currently in preview. It is a new and substantially enhanced version, with more features to come soon. In late April or early May 2016, a new version of the SQL Server 2016 Upgrade Advisor will be released and will be named the Data Migration Assistant (DMA), also as a preview. Among the enhancements is the ability not only to detect upgrade issues, but also to fix some issues and to perform the upgrade and migration for you using database backup functionality.

Source of Information : Migrating SQL Server Databases to Azure

Migrating a user database to Azure

SQL Server provides a variety of mechanisms to help you migrate some or all of the user objects and data from both Microsoft SQL Server and non-Microsoft SQL Server databases to an Azure environment:

• Migrating from a SQL Server source database: You can migrate from a SQL Server 2005–2016 database to an Azure environment, but you may have compatibility issues to resolve either before the migration can be completed or after the migration but before the database is ready to use.

  • Migrating from a recent version of SQL Server to a supported version of SQL Server in an Azure virtual machine will require few if any changes at the user database level.

  • Migrating from an older version of SQL Server to a newer version may surface breaking changes that you will have to resolve before, during, or after the migration.

  • When you are migrating from a SQL Server database to Azure SQL Database, you may have objects that are not compatible and that you will need to fix prior to the migration.

  • Microsoft provides a number of mechanisms to minimize downtime during the migration, depending upon both your source environment and your destination.

• Migrating from a non–SQL Server source database: You can migrate from the following non–SQL Server source databases into an Azure environment:

  • Access
  • DB2
  • MySQL
  • Oracle
  • Sybase


When you are migrating from a non–SQL Server database, SQL Server provides mechanisms to convert a substantial percentage of the database objects automatically and will assist you in identifying the objects that cannot be converted, providing you guidance in how to convert those remaining objects. These changes will have to be made either before or during the migration. In most cases, you cannot complete the migration until you have established compatibility with the destination environment.

Regardless of the migration method, you will have to manage metadata that is stored outside the user databases, plan to minimize user downtime, and determine how to redirect users from the old database to the new database.

After the migration is complete, set the appropriate database compatibility level: the highest level possible gives you the best performance and access to the newest database capabilities, while a lower level provides partial backward compatibility with earlier versions of SQL Server until you have converted applications that use deprecated and discontinued functionality. By default, a migrated database retains its existing compatibility level if that level is at least the minimum supported by the instance to which the database is being migrated.
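Checking and adjusting the compatibility level is a small T-SQL operation (the database name is a placeholder; 130 is the SQL Server 2016 / SQL Database V12 level):

```sql
-- Check the current compatibility level of the migrated database:
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'AdventureWorks';

-- Raise it once applications no longer rely on older behavior:
ALTER DATABASE AdventureWorks SET COMPATIBILITY_LEVEL = 130;
```

Lowering the level is equally valid as a short-term bridge while applications are being converted.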

Finally, how you perform the migration will be affected by your tolerance for downtime in your existing production environment. You will need to plan your migration to handle the amount of downtime required. For example, you may perform an initial migration of a copy of your database, perform the required testing and development work on the copy, and later migrate the production system by using one of the mechanisms discussed later in this chapter to minimize downtime. Or, you may discover that your database requires minimal or no changes and can be migrated during a scheduled maintenance window by using one of the simple migration methods.

Source of Information : Migrating SQL Server Databases to Azure

Service tiers and capabilities

SQL Database service tiers also differ based on capabilities related to business continuity and in-memory features.


Business continuity services
All service tiers provide mechanisms to enable a business to continue operating in the face of disruption to its computing infrastructure. The types of potential disruption scenarios from which these mechanisms protect you are:
• Local hardware and software failures, such as disk failure
• Data corruption and deletion, such as application bugs and user errors
• Data center outage, such as a natural disaster
• Upgrade or maintenance errors, such as unanticipated issues during an application upgrade requiring a rollback to a previous state

The next two sections discuss these business continuity mechanisms and how they differ across service tiers.


Automatic database backup and self-service geo-restore services
To protect your data and enable point-in-time restore services, SQL Database automatically takes full database backups every week, multiple differential backups every day, and log backups every five minutes. Backup files are stored in a geo-redundant storage account with read access (RA-GRS) to ensure backups’ availability for disaster recovery purposes in case of catastrophic disaster at any single data center. The first full backup is scheduled immediately after a database is created. After the first full backup, all further backups are scheduled automatically and managed silently in the background. The exact timing of full and differential backups is determined by the system to balance overall load.

SQL Database provides this service for all databases regardless of service tier, but the retention periods for the database backups differ as follows:
• Basic: Any restore point within the past 7 days
• Standard: Any restore point within the past 14 days
• Premium: Any restore point within the past 35 days
SQL Database provides both database copy and point-in-time restore services on top of this automated backup system, which enable you to restore an existing or deleted database to a new database as of a specified point in time, including the current point in time, within the retention period supported by the service tier. This capability is offered within the current data center.

Storage geo-replication provides the ability to restore a database to another data center from the last geo-redundant backup to create a new database in any geographic region. This capability is called geo-restore. The geo-replication of the backup blobs from the automated backups guarantees that daily backups are available even after an outage or massive failure in the primary region.


Active geo-replication
In addition to automated backups and storage geo-replication, SQL Database offers active geo-replication for all databases regardless of service tier. Active geo-replication provides a mechanism to create and maintain asynchronous secondary replicas in up to four regions. These secondary replicas can be either offline or online and readable, and they can be in the same or different data centers.

A readable secondary is priced at 1X the price of the selected performance level. A non-readable secondary is charged at a discounted rate of 3/4X the price of the selected performance level. To ensure that transactions being applied to the secondary do not bottleneck the primary, the secondary must be at the same or higher performance level than the primary. Non-readable secondaries are available for failover if needed due to any of the disaster scenarios discussed above. Readable online secondary replicas can be used for load balancing and/or low-latency read access from different geographic regions. When performing an application upgrade or maintenance operation, the continuous replication can be frozen immediately prior to the upgrade or maintenance operation so that you can easily fall back. In case of a data center outage, you can manually fail over to one of the secondary replicas. You also can design and deploy a small worker role application that monitors your primary database and triggers a failover if necessary. Active replication topologies can combine readable and non-readable secondaries to support both disaster recovery and application load-balancing scenarios.


In-memory services
In-memory services are available only in the Premium service tier, with different size limits for memory storage based on the performance level within the Premium tier. In-memory services encompass two features: in-memory OLTP and Operational Analytics.

• In-memory OLTP is a feature that can greatly improve OLTP performance by storing selected OLTP tables in memory (5X–20X improvement) and by using compiled stored procedures (100X improvement). In-memory OLTP has been available in SQL Server beginning with SQL Server 2014 and has been enhanced significantly in SQL Server 2016. These enhancements also are available in SQL Database.

• Operational Analytics provides the ability to run both analytics and OLTP workloads in the same database at the same time without a significant degradation in performance. Besides running analytics in real time, you can eliminate the need for loading data into a data warehouse. Operational Analytics is a feature introduced in SQL Server 2016, with the same feature available in SQL Database.
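As an illustrative sketch of the in-memory OLTP feature, a memory-optimized table and a natively compiled stored procedure look like this (all object names are placeholders; on SQL Server this requires a MEMORY_OPTIMIZED_DATA filegroup, which the SQL Database Premium tier provides for you):

```sql
-- A memory-optimized, fully durable table with a hash index:
CREATE TABLE dbo.ShoppingCart (
    CartId      INT IDENTITY PRIMARY KEY NONCLUSTERED,
    UserId      INT NOT NULL
        INDEX ix_UserId NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CreatedDate DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- A natively compiled stored procedure operating on that table:
CREATE PROCEDURE dbo.usp_AddCart @UserId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (UserId, CreatedDate)
    VALUES (@UserId, SYSDATETIME());
END;
```

The table delivers the in-memory storage gains, and the natively compiled procedure delivers the compiled-code gains described above.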

A two-minute video about the benefits of using the in-memory OLTP feature is available at Azure SQL Database - In-Memory Technologies.

Source of Information : Migrating SQL Server Databases to Azure

Service tiers and performance levels

SQL Database service tiers differ based on performance levels. Within each service tier, performance levels are designed to have a database operate as if it is running in its own machine, isolated from other databases and services. SQL Database accomplishes this by using a resource governor for certain resources and hard resource limits for other resources.

Each database can have its own service tier with its own limits, independent of any other database. This option is best for databases with a consistent load. You also can place multiple databases in an elastic pool where resources can be shared among databases and minimum and maximum resource thresholds can be set for each database in the pool.

The resource governor enforces a maximum amount of resources for either a single database or an elastic pool of databases. If aggregated resource utilization reaches the maximum CPU, memory, log I/O, or data I/O resources assigned to the database, the resource governor queues queries and assigns resources to the queued queries as resources become available.
To assist you in choosing the amount of resources you need for your database, Microsoft developed a unit of measure called a Database Transaction Unit (DTU). To develop this benchmark, Microsoft took a set of operations that are typical for an online transaction processing (OLTP) request and then measured how many transactions could be completed per second under fully loaded conditions within each performance level.

If you are migrating an existing SQL Server database, you can use a third-party tool, the Azure SQL Database DTU Calculator, to get an estimate of the performance level and service tier your database might require in SQL Database.
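The idea behind such an estimate can be illustrated with a small, hypothetical sketch: normalize each measured resource against the capacity of a candidate performance level and treat the largest ratio as the bottleneck. The capacity figures below are invented for illustration and are not published limits.

```python
# Hypothetical sketch of the reasoning behind a DTU estimate: the level
# you need is driven by whichever resource (CPU, data I/O, log I/O) is
# closest to its limit. All capacity numbers here are invented.

def bottleneck_utilization(measured, capacities):
    """Return the highest utilization fraction across the tracked resources."""
    return max(measured[k] / capacities[k] for k in capacities)

capacities = {"cpu_pct": 100.0, "data_iops": 600.0, "log_mb_s": 10.0}  # hypothetical
measured = {"cpu_pct": 40.0, "data_iops": 450.0, "log_mb_s": 2.0}

print(bottleneck_utilization(measured, capacities))  # 0.75 -> data I/O is the bottleneck
```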


DTU resource limits
Single database: A single database can have between 5 and 1,750 DTUs. A Basic database has 5 DTUs, which means it can complete 5 “typical” transactions per second, while a Premium P11 database supports 1,750 DTUs.


Elastic pool database: A database can be a member of an elastic pool with between 100 and 1,500 DTUs. All databases within a pool share a common set of resources.


Other resource limits
SQL Database enforces limits on other resources by denying all new requests when limits are reached. These limits include the following:
 Maximum database size: Between 5 GB and 1 terabyte (TB)
 Max in-memory OLTP storage: Not available in the Basic and Standard tiers; up to 10 GB in Premium
 Max concurrent requests: Between 30 and 2,400
 Max concurrent logins: Between 30 and 2,400
 Max sessions: Between 300 and 32,000
 Max databases using automated export: 10
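Because new requests are denied outright when one of these limits is reached, client code typically retries with exponential backoff. The sketch below is a generic pattern, not Microsoft's client library; the transient-error check is a placeholder for inspecting your database driver's error codes.

```python
import random
import time

# Generic retry-with-backoff sketch for requests denied at a resource
# limit. is_transient_limit_error is a placeholder predicate; a real
# implementation would inspect the driver-specific error code.

def with_retries(operation, max_attempts=5, base_delay=0.1,
                 is_transient_limit_error=lambda exc: True,
                 sleep=time.sleep):
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception as exc:
            if attempt == max_attempts - 1 or not is_transient_limit_error(exc):
                raise
            # Exponential backoff with jitter so retrying clients spread out.
            sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

A call site would pass its own operation, for example `with_retries(run_query)`, and supply a predicate matching its driver's transient error codes.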

Source of Information : Migrating SQL Server Databases to Azure

Azure SQL Database


SQL Database is an Azure V12 SQL database running within the SQL Server platform service and associated with a logical server. An Azure V12 SQL database is similar in most respects to a SQL Server 2016 database, but there are a few differences. The SQL Server platform service provides the functionality to create and manage your database solution, freeing you up to focus on the solution rather than on the underlying network, storage, and compute resources or on managing a SQL Server instance.

When creating your SQL database, you specify the amount of resources available to your new SQL database and associate it with either a new or an existing SQL Database logical server. The features and resource limits available to your new SQL database are determined by the service tier and performance level you select for the SQL database. The SQL Database service manages the resources available to each SQL database using a resource governance mechanism and hard resource limits (both of which are discussed below). A SQL database is portable and can be moved between logical servers.

To secure your SQL Database logical server and database instances, a SQL Database firewall limits connectivity to either Azure services or specific IP addresses. The SQL Database firewall operates at both the logical server and the database level.
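As a sketch of how the two firewall levels are configured, the snippet below composes the T-SQL for the sp_set_firewall_rule (server-level, run in master) and sp_set_database_firewall_rule (database-level) stored procedures. Executing the statements requires a database driver such as pyodbc, which is omitted here; the rule name and addresses are examples.

```python
import ipaddress

# Compose T-SQL for SQL Database firewall rules. Server-level rules use
# sp_set_firewall_rule (in master); database-level rules use
# sp_set_database_firewall_rule (in the user database).

def firewall_rule_sql(name, start_ip, end_ip, database_level=False):
    # Validate the addresses before interpolating them into T-SQL.
    ipaddress.ip_address(start_ip)
    ipaddress.ip_address(end_ip)
    proc = "sp_set_database_firewall_rule" if database_level else "sp_set_firewall_rule"
    return (f"EXECUTE {proc} @name = N'{name}', "
            f"@start_ip_address = '{start_ip}', @end_ip_address = '{end_ip}';")

print(firewall_rule_sql("office", "203.0.113.10", "203.0.113.20"))
```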

Source of Information : Migrating SQL Server Databases to Azure

What is Azure SQL Database (PaaS)?


Azure SQL Database is a database-level service that delivers predictable performance, scalability with no downtime, business continuity, and data protection. It is designed to deliver predictable database performance at a variety of service levels with very little administration. Microsoft automatically configures, patches, and upgrades the database for you and provides an availability SLA of 99.99 percent.

In contrast to Azure virtual machines, with SQL Database you are guaranteed a certain level of performance, regardless of usage by other users. Predictable performance in Azure SQL Database is delivered based on service tiers, from Basic to Premium, with different levels of performance and capabilities both within and across tiers to support lightweight to heavyweight database workloads. The amount of performance you get in each tier is represented as a number of DTUs. A DTU is a database transaction unit and represents a combination of compute, database I/O, and memory resources. A certain amount of these resources is guaranteed at each service tier level. Furthermore, there are maximum limits at each performance level for sessions, concurrent logins, concurrent requests, and in-memory OLTP storage (a Premium-tier feature only). The guaranteed DTU resources, features, and limits vary by service tier and performance level.

You can build your first app on a small database for a few dollars a month, then change the service tier and performance level manually or programmatically at any time as your app requires resources, with minimal downtime to your app or your customers. You are billed at an hourly fixed rate (not by query) based on the service tier and performance level you choose; outbound Internet traffic is billed separately, with the first 5 GB of network traffic per month free.
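The "change the performance level as your app requires resources" idea can be sketched as a simple selection rule: pick the smallest level whose DTUs cover recent peak demand plus headroom. The (name, DTU) pairs below reuse DTU figures mentioned in this document but are only an illustrative subset of the catalog, and the headroom factor is arbitrary.

```python
# Hypothetical tier-selection sketch. LEVELS is an illustrative subset
# of performance levels, sorted ascending by DTUs.

LEVELS = [("Basic", 5), ("S2", 50), ("P1", 125), ("P11", 1750)]

def choose_level(peak_dtu, headroom=1.2):
    """Return the cheapest level covering peak demand plus headroom."""
    needed = peak_dtu * headroom
    for name, dtus in LEVELS:
        if dtus >= needed:
            return name
    return LEVELS[-1][0]  # demand exceeds every level; stay at the top

print(choose_level(4))    # Basic
print(choose_level(40))   # S2
print(choose_level(900))  # P11
```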

Azure SQL Database V12 is based on SQL Server 2016, delivering close compatibility with SQL Server 2016. There is a limited set of features in SQL Server 2016 (and in earlier versions) that are not yet supported in Azure SQL Database. This set of features is shrinking. At the same time, Microsoft has adopted a cloud-first approach to new features in SQL Server—delivering many new features first in Azure SQL Database (in both private and public preview mode) before releasing them to SQL Server 2016 (and to rolling upgrades to SQL Server 2016 that will be coming).

Microsoft combines the power of machine learning and automation with Azure SQL Database to deliver a number of features that are available only in Azure SQL Database. These include:

 Built-in backup: Reduces administration costs when there are large numbers of databases and supports point in time restore, geo-restore, standard geo-replication, and active geo-replication

 Auditing and threat detection: Tracks database events to maintain regulatory compliance, understand database activity, and gain insight into discrepancies and anomalies that could indicate business concerns or security violations

 Index advisor: Recommends and/or automatically adds indexes based on actual query performance and removes added indexes that provide no value

 Query performance insight: Provides insight into resource usage by top CPU consuming queries and the ability to drill down into query details of problematic queries

 Elastic database pools: Enables you to pay for a pool of resources and share them across databases, spending efficiently on resources as needed

Furthermore, you can manage many aspects of Azure SQL Database, including monitoring its health, directly through the Azure portal dashboard. The following figure shows an Azure SQL Database in the Azure portal.

Source of Information : Migrating SQL Server Databases to Azure

What is SQL Server in an Azure virtual machine (IaaS)?


Running SQL Server in an Azure VM is similar to running SQL Server in a virtualized environment in your own data center or in a traditional hosting environment, except that Microsoft provides the hardware on which the virtual machine runs and provides an availability SLA of 99.95 percent. You are in charge of, responsible for, and have either some or complete control of:

 Choosing the amount of compute resources: Many virtual machine size options

 Choosing and configuring storage: Many storage options

 Choosing an operating system: Many versions, including Windows (SQL Server only runs on Windows)

 Choosing the version of SQL Server: Any version of SQL Server

 Choosing SQL Server license model: Choose preinstalled version with per-minute licensing or bring your own license with SA

 Managing the virtual machine: Manage the virtual machine itself via Remote Desktop Protocol (RDP), PowerShell, or CLI

 Managing SQL Server: Manage SQL Server using SQL Server Management Studio or any other SQL tool you wish

 Securing the virtual machine: Manage firewalls and access using standard Windows methods, including choosing to join a domain (directly or using Azure Active Directory)

 Securing SQL Server: Manage authentication and access using standard SQL Server methods

 Optimizing virtual machine performance: Tune the virtual machine using standard virtual machine tuning methodologies

 Optimizing SQL Server performance: Tune the SQL Server instance using standard SQL Server tuning methodologies

 Managing costs: Dynamically scale up and scale down the virtual machine size as compute power is needed, and add or remove disks as storage and throughput are needed

 Patching the operating system: Your responsibility

 Patching SQL Server: Your responsibility, but Microsoft provides tooling to assist

 Backing up SQL Server databases: Your responsibility, but Microsoft provides tooling to assist

 Managing availability: Microsoft ensures the availability of the VM itself within a single data center, but all additional availability is your responsibility with Microsoft providing tooling to assist (see AlwaysOn Availability Groups).

As you can see, with SQL Server on an Azure VM, you have great ability to configure and manage SQL Server in a VM in a manner similar to how you currently manage IT resources in your on-premises environment.

Source of Information : Migrating SQL Server Databases to Azure

Azure services


Wikipedia describes Microsoft Azure as “a cloud computing platform and infrastructure, created by Microsoft, for building, deploying and managing applications and services through a global network of Microsoft-managed and Microsoft partner hosted datacenters.” This is a pretty good general description of what Azure provides to you.

Here’s Microsoft’s description of Azure: “a growing collection of integrated cloud services—analytics, computing, database, mobile, networking, storage, and web—for moving faster, achieving more, and saving money.”

Azure allows you to do the following:
 Use the skillsets you already possess and the technologies with which you already are familiar to develop and deploy solutions using SQL Server.

 Work with a wide range of operating systems, programming languages, databases, and devices.

 Integrate Azure with your existing IT environment, including Active Directory for single sign-on.

 Scale up and scale down your Azure services based on demand so you only pay for what you need when you need it.

 Maintain data privacy. With Azure services, Microsoft was the first major cloud provider to adopt the new international cloud privacy standard, ISO 27018.

 Encrypt your SQL Server data both at rest and on the wire.

 Have enterprise-grade service-level agreements (SLAs) on services, 24/7 tech support, and round-the-clock service health monitoring.

Finally, you might be asking: Why Microsoft? A good answer is that Microsoft is the only vendor positioned as a Leader across Gartner’s Magic Quadrants for Cloud Infrastructure as a Service, Application Platform as a Service, and Cloud Storage Services for the second consecutive year. Another answer is that Microsoft is the only vendor to offer SQL Server as a service and SQL Server in IaaS. A final answer is that Microsoft is extending the capabilities of SQL Server in Azure at a faster pace than any other vendor.

Source of Information : Migrating SQL Server Databases to Azure

SQL Server in Microsoft Azure


SQL Server can be hosted entirely in Microsoft Azure, either in a hosted virtual machine (VM) or as a hosted service. Hosting a virtual machine in Azure is known as infrastructure as a service (IaaS), and hosting a service in Azure is known as platform as a service (PaaS). Microsoft's hosted version of SQL Server, known as Azure SQL Database (or just SQL Database), is optimized for software as a service (SaaS) app development.

SQL Server also can be deployed in a hybrid cloud scenario, extending your on-premises SQL Server environment to utilize various features of the Azure platform:
 SQL Server backup to URL: You can back up your database directly to Azure blob storage or back it up to an on-premises file store and then copy it to Azure blob storage. Using this option can save precious storage space on expensive local storage.

 SQL Server data files in Azure: You can use Azure blob storage for database files for an on-premises instance of SQL Server. Although this option primarily is used with Azure virtual machines, it has its place when developing and testing functionality in some scenarios.

 Stretch SQL Server table to Azure SQL Database: You can stretch an on-premises table to store cold and warm data (older data) in Azure SQL Database while hot or current data (more recent data) remains in the on-premises table—with all of the data being available to query. This option became available in SQL Server 2016 and is a great way to archive infrequently accessed data off your local storage. This option can save money and increase performance for online transactional processing (OLTP) operations on hot data, while the data remains available for analytical queries.

 Transactional replication to Azure SQL Database: You can use transactional replication to replicate data from an on-premises or IaaS SQL Server database to Azure SQL Database. This option is useful to replicate data close to different groups of users to improve query performance and as a prelude to migrating to Azure, enabling you to minimize downtime during migration.

 AlwaysOn Availability Group replica in IaaS: You can configure SQL Server in IaaS as an asynchronous replica of an AlwaysOn Availability Group. This option provides you with a low-cost disaster recovery scenario and can be used as a prelude to migrating to Azure, allowing you to minimize downtime.
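The backup-to-URL option in the first bullet above can be sketched as composing a BACKUP DATABASE ... TO URL statement. The storage account, container, and credential names below are placeholders, and the statement would be executed on the SQL Server instance itself via a database driver (not shown).

```python
# Compose the T-SQL for backing a database up directly to Azure blob
# storage. All names here are illustrative placeholders.

def backup_to_url_sql(database, storage_account, container, credential):
    url = (f"https://{storage_account}.blob.core.windows.net/"
           f"{container}/{database}.bak")
    return (f"BACKUP DATABASE [{database}] TO URL = '{url}' "
            f"WITH CREDENTIAL = '{credential}';")

print(backup_to_url_sql("Sales", "mystorageacct", "backups", "mycredential"))
```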

Finally, Microsoft offers an additional PaaS service using SQL Server for data warehouse solutions, called Azure SQL Data Warehouse. Azure SQL Data Warehouse is an enterprise-class, distributed database capable of processing massive volumes of relational and non-relational data.

Source of Information : Migrating SQL Server Databases to Azure

Archiving by using the Azure StorSimple appliance


Many businesses face situations in which they are required to retain large amounts of data for long
periods of time, often with the expectation that this data will rarely or never be accessed. One way to
accomplish this is through the use of a custom on-premises storage appliance designed to
communicate with the cloud.

The StorSimple storage appliance manages SharePoint, SQL Server, and ordinary file share data, and "ages out" infrequently accessed data to the Azure cloud in a highly efficient (encrypted, compressed) fashion.

In the preceding illustration, you can see the scenarios in which the StorSimple appliance is commonly used. These include the following:

 Archive

  • Archives and disaster recovery
  • Dramatic cost reduction
  • No changes to application environment


 File shares

  • File share with integrated data protection
  • All-in-one primary data + backup + live archives + disaster recovery with deduplication and compression


 SharePoint

  • SharePoint storage on StorSimple and Azure
  • StorSimple SharePoint Database Optimizer
  • Improved performance and scalability


 SQL Server

  • Storage for Tier 3 SQL databases
  • Integrated backup, restore, and disaster recovery


 Virtual environments

  • Control virtual sprawl
  • Cloud-as-a-tier
  • Offload storage footprint
  • VMware Storage DRS Storage pools
  • Virtual Machine Archive
  • Regional VM Storage


The benefits of using a storage appliance that takes advantage of the Azure platform are the consolidation of primary storage with archive, backup, and disaster recovery through seamless integration with Azure, plus cloud snapshots, deduplication, compression, and encryption. Together, these benefits result in an average reduction of enterprise storage total cost of ownership (TCO) of 60 to 80 percent.

Design considerations

When planning an Azure archiving deployment using the StorSimple appliance, consider the
following:

 What types of data do you want to save in the cloud using StorSimple? For example, you might choose to use Azure Backup for transactional SQL data if it is highly unlikely that your users will be querying days-old data. Alternatively, you might use StorSimple for data in SharePoint that is only occasionally referenced but needs to be accessible in real time.

 Consider StorSimple for data with legal retention requirements; for example, you must keep the data available for long periods of time.

Source of Information : Microsoft Enterprise Cloud Strategy

Azure Site Recovery


Perhaps the greatest fear any IT manager has is downtime of the entire IT environment. When an enterprise IT environment fails—due to, for example, an on-premises datacenter outage—the results
can be catastrophic for a business. Moreover, when the outage is remediated, systems often must be brought back online in a particular order to smoothly restore operations.

The cloud presents a number of new opportunities for enabling business continuity and disaster recovery. (Data backup and recovery was discussed in the previous section.) Azure Site Recovery makes it possible for workloads to be quickly replicated to Azure, and to be restored in an orderly
fashion using Orchestrated Disaster Recovery as a Service (DRaaS). Enterprise IT professionals can
create recovery plans, dictating specifics such as which workloads must be run first, or only running a
workload upon the successful completion of an integrity check.

Using Microsoft System Center Virtual Machine Manager, you can replicate virtual machines (VMs) as well as physical servers to the Azure cloud under the control of site-defined policies. You can also
“burst” workloads to Azure when surges occur. Microsoft System Center Operations Manager will also monitor the operation of the on-premises systems from Azure, ensuring that failures are detected and managed as quickly as possible.

When planning a Site Recovery deployment, consider the following:
 What are your Recovery Time Objective (RTO) and Recovery Point Objective (RPO)? RTO is the time within which an application or ecosystem must be brought back online; RPO is the maximum acceptable amount of data loss, measured in time, when the system is recovered.

 Which systems must you bring back up first, and which systems depend on others before you
can restart them? For example, it might be necessary for the database server to be online
prior to starting a web server or SharePoint system. Knowing this sort of information will aid
in building a recovery plan.
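The dependency question above can be sketched as a tiny topological sort over hypothetical systems: each workload is started only after everything it depends on, which is exactly what a recovery plan encodes.

```python
# Minimal recovery-plan ordering sketch: a topological sort over a
# dependency map. The example systems are hypothetical.

def recovery_order(depends_on):
    """depends_on maps system -> list of systems that must start first."""
    order, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in depends_on.get(node, []):
            visit(dep)  # start dependencies first
        order.append(node)
    for node in depends_on:
        visit(node)
    return order

plan = recovery_order({
    "web": ["database"],
    "sharepoint": ["database"],
    "database": [],
})
print(plan)  # database comes up before web and sharepoint
```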

Source of Information : Microsoft Enterprise Cloud Strategy

Using the cloud for data backup and recovery


Data backup and replication is one of the most common and straightforward uses of the cloud in hybrid scenarios. Cloud storage is relatively inexpensive and, for all intents and purposes, unlimited,
and these facts open up a number of useful application scenarios. In the next few sections, we will examine several such scenarios in which Azure storage complements on-premises assets.


Azure Backup
Backup, of course, although unheralded, is one of the most important functions that any IT department performs. In many cases, compliance or other legal requirements force businesses to retain data for long periods of time. Traditionally, backup requires secondary media, a secure location
to store backups, and a set of operational procedures to both carry out the backup process and recover the data in the event of a catastrophe.

By employing easy-to-use tools and the inexpensive storage available in the cloud, you can augment
or replace existing backup mechanisms with Azure Backup.

Azure Backup operates in a hybrid model, utilizing a VPN tunnel to connect onsite resources to the
Azure cloud. You can then use Microsoft System Center Data Protection Manager, enhanced for the
cloud, to back up and restore data. Azure Backup will retain data for up to 99 years with 99.9% availability. Backed-up data is secure and encrypted, and other features such as data compression and
bandwidth throttling ensure optimum use of IT and network resources.


When designing your IT ecosystem to take advantage of Azure Backup, think about the following:

 Which applications would most benefit from offsite backup? This will help you to prioritize the
applications to which you should deploy Azure Backup first.

 How much data do these applications maintain? This will help you to properly size the offsite
storage.

Source of Information : Microsoft Enterprise Cloud Strategy

Hybrid cloud connectivity


In a hybrid cloud, some applications are hosted on-premises, whereas others reside in the cloud. Ideally, where these applications live is transparent to end-users. In other words, cloud-resident applications should appear to be within the on-premises network, with appropriate IP addressing and
routing. Applications in the cloud are configured to be in the same IP range as those in the datacenter
through the Microsoft Azure portal.

There are a number of approaches to achieving this type of location transparency. This section describes four separate ways to connect a datacenter to Azure:

 Point-to-site connectivity
 Site-to-site connectivity
 Azure ExpressRoute (via an Exchange Provider)
 ExpressRoute (via a Network Service Provider)

The choice you make will depend on the how you calculate the bandwidth/cost tradeoff; the need, or
not, to be isolated from the open Internet; and how geographically dispersed your sites are.


Point-to-site
Using the Internet, you can create a virtual private network (VPN) between your datacenter and Azure. The first approach is called point-to-site connectivity, in which the VPN is configured through software on individual client computers in the datacenter. The least expensive of all the options, point-to-site connections are useful when only a few machines on-premises need connectivity to the cloud, or when the connection is from a remote or branch office.


Site-to-site
Another approach is called site-to-site connectivity. In this configuration, a datacenter deploys a hardware VPN gateway to link the on-premises datacenter in its entirety with applications and data in
the cloud. The hardware gateway must have a public-facing IP address and a technician must be
available to perform the configuration.


ExpressRoute via an Exchange provider
When it comes to accessing their cloud applications, many enterprises want configurable and deterministic network latency. They might also want their network traffic isolated from the public
Internet. To support these requirements, a direct connection from the datacenter to Azure using a
partner telecommunications carrier, called ExpressRoute, is provided, as depicted in the illustration that follows. Although this is potentially a more expensive solution, ExpressRoute provides the fastest connectivity as well as isolation from the Internet, essentially by connecting via a “dedicated line.” A full list of supported telecom providers for ExpressRoute is available on the Microsoft website at https://azure.microsoft.com/en-us/documentation/articles/expressroute-locations/.


ExpressRoute via network service provider
In addition, it is possible to connect through a telecom network service provider such that Azure
simply appears as another site on the enterprise’s wide area network. As with the previous approach, by using a telecom provider as the transport, you can negotiate bandwidth with the provider and, of course, network isolation is provided. You will need to work with your telecom provider to find the best approach for your organization.

Source of Information : Microsoft Enterprise Cloud Strategy

Integration


Of course, no application in enterprise IT exists as an island; every application communicates in one
form or another with others. Applications can receive or send real-time updates to others through direct messaging, queues, or publish-and-subscribe techniques; can receive events from external
sources such as sensors; and/or can receive bulk, batch updates (“extract, transform, and load” or ETL) from others.

Azure Service Bus is a messaging system in the cloud for connecting applications, services, and devices to one another through a variety of patterns, including queues, topics, and publish-and-subscribe. Service Bus supports a variety of protocols (REST, AMQP, WS-*), and you can use it to connect cloud applications to one another and to on-premises applications as well.

Azure Event Hubs provides a massively scalable event ingestion service. Also supporting a variety of protocols, Event Hubs can scale out to support thousands, millions, or even billions of events per day
and is designed for small or very large IoT applications.

Azure Logic Apps gives developers a means to quickly create applications in a stepwise fashion by connecting applications such as SQL Database or Twitter visually. With Logic Apps, you can rapidly and graphically develop workflow apps with connectors and triggers.

EDI (Electronic Data Interchange) is one of the oldest data integration standards, and its use in electronic commerce is widespread. Azure BizTalk Services provides a cloud-based means for connecting EDI applications, with support for X12, EDIFACT, and AS2.

Of course, this list of services does not represent the full panoply of capabilities available. We encourage you to frequently review the Microsoft Azure website (www.microsoftazure.com) for updates, new features, and new services.

Source of Information : Microsoft Enterprise Cloud Strategy

Analysis


Flexibility and a variety of options also characterize analytics in the cloud. Cloud analytics opens up many new possibilities, but the most exciting is predictive analytics; that is, the ability to predict future performance from historical data. The possibilities are almost limitless, but consider the use of Big Data for predictive analysis in the following:

 Churn analysis
 Social-network analysis
 Recommendation engines
 Location-based tracking and services
 IT infrastructure and web application optimization
 Weather forecasting for business planning
 Legal discovery and document archiving
 Equipment monitoring
 Advertising analysis
 Pricing analysis
 Fraud detection
 Personalized everything

The tools available in the cloud to help you with analytics include HDInsights, Machine Learning, Stream Analytics, Search, and business intelligence (BI) and reporting. With these tools you can explore how to turn today’s noise into tomorrow’s insights.

Azure HDInsight is a Hadoop-based service that brings an Apache Hadoop solution to the cloud. You can use it for managing data of any type and any size. HDInsight can process unstructured or semi-structured data from web clickstreams, social media, server logs, devices and sensors, and more, making it possible for you to analyze new sets of data that uncover new business possibilities to drive your organization forward.

Azure Machine Learning offers a streamlined experience for all data-science skill levels, from setting up with only a web browser to using drag-and-drop gestures and simple data flow graphs to set up
experiments. Machine Learning Studio features a library of time-saving sample experiments, R and Python packages, and best-in-class algorithms from Microsoft businesses like Xbox and Bing. Azure
Machine Learning also supports R and Python custom code, which you can drop directly into your
workspace and share across your organization.

In this example, the experiment ingests data about cars, including engine size, mileage, size of the car, and so on, and then trains a model to predict a car's price based on those parameters. Data scientists can pick from any one of many prebuilt algorithms to train the model, or they can supply a custom R module. At the end of the training, a web service can then be created (see the bottom panel of the screen) that can subsequently be used in an application in which an end user supplies data and Azure Machine Learning provides a predicted price.

Azure Stream Analytics gives you the ability to rapidly develop and deploy a low-cost, real-time analytics solution to uncover real-time insights from devices, sensors, infrastructure, and applications. It opens the door to various opportunities, including IoT scenarios such as real-time remote management and monitoring or gaining insights from devices like mobile phones or connected cars. Stream Analytics provides out-of-the-box integration with Azure Event Hubs to ingest millions of events per second. Stream Analytics will process ingested events in real time, comparing multiple real-time streams or comparing real-time streams with historical values and models. You can use this to detect anomalies, transform incoming data, trigger an alert when a specific error or condition appears in the stream, and power real-time dashboards.
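As a purely illustrative stand-in for the kind of anomaly check a Stream Analytics job might express, the sketch below flags a reading that strays too far from the rolling mean of recent values; the window size and threshold are arbitrary.

```python
from collections import deque

# Toy rolling-window anomaly detector over a stream of readings.
# Window size and threshold are arbitrary illustration values.

class RollingAnomalyDetector:
    def __init__(self, window=5, threshold=5.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        """Return True if value is anomalous relative to the recent window."""
        anomalous = False
        if len(self.values) == self.values.maxlen:
            mean = sum(self.values) / len(self.values)
            anomalous = abs(value - mean) > self.threshold
        self.values.append(value)
        return anomalous

detector = RollingAnomalyDetector()
readings = [20.0, 20.5, 19.8, 20.2, 20.1, 35.0, 20.3]
flags = [detector.observe(r) for r in readings]
print(flags)  # only the 35.0 spike is flagged
```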

Azure Search is a fully managed cloud service with which developers can build rich search applications by using the .NET SDK or REST APIs. It includes full-text search scoped over your content, plus advanced search behaviors similar to those found in commercial web search engines, such as type-ahead query suggestions based on partial term input, hit highlighting, and faceted navigation. Natural language support is built in, using the linguistic rules appropriate to the specified language.

Search is an API-based service for developers and system integrators who know how to work with
web services and HTTP. Search takes the complexity out of managing a cloud search service and
simplifies the creation of search-based web and mobile applications.

Azure BI and Reporting: The Microsoft Azure Virtual Machine gallery includes images that contain SQL Server installations that you can use to easily set up SQL Server Reporting Services in the cloud. You can create an Azure Virtual Machine that runs Microsoft SQL Server Business Intelligence (BI) features and Microsoft SharePoint 2013. Microsoft Power BI (at http://www.powerbi.com) is a SaaS application with which you can quickly build visually appealing, interactive dashboards. An ever-increasing number of connectors gives you the ability to bring data from cloud data sources and other Microsoft and third-party SaaS applications into Power BI. As data increases dramatically in volume, velocity, and diversity, actionable analytics becomes challenging. You need interoperable tools and systems to maximize your existing investments in analytics and provide the flexibility to evolve on your own terms.

Source of Information : Microsoft Enterprise Cloud Strategy

NoSQL (nonrelational) storage


The NoSQL arena has many options, ranging from simple object storage to complex document and
graph-based data stores.

Azure Storage has several component features that provide the flexibility to store and retrieve large
amounts of unstructured data such as documents and media files with Azure Blobs, structured
NoSQL-based data with Azure Tables, reliable messages with Azure Queues, and Server Message
Block (SMB) protocol file shares with Azure File Service. The following is a brief look at how to
differentiate each of these component features:

 Azure Blob Storage is designed to store data in essentially any format. Blobs (short for "binary large objects," somewhat of a misnomer given that text is equally at home in a blob) are analogous to files on a server or client machine. Blobs can hold text, images, media, comma-separated-value (CSV) files, databases—virtually anything. Like Azure Tables (described next), blobs are triple redundant, meaning their contents are always replicated to two other physical stores in Azure, minimizing the possibility of data loss in the event of hardware failure.

 Azure Tables provides a simple but performant key/value store. Azure Tables lets an application store properties of various types, such as strings, integers, and dates. An application can then retrieve a group of properties by providing a unique key for that group. Although complex operations such as joins aren't supported, tables offer fast access to typed data. They're also very scalable, with a single table able to hold as much as a terabyte of data. And, matching their simplicity, tables are usually less expensive to use than relational storage. Consider Azure Tables when an Azure application needs fast access to typed data, possibly lots of it, but does not need to perform complex SQL queries on that data. For example, imagine that you're creating a consumer application that needs to store profile information for each user; you expect a large number of users, but you won't do much with this data beyond storing and retrieving it in simple ways. This is the kind of scenario for which Azure Tables makes sense.

 Azure Queues is a service for storing large numbers of messages that users can access from anywhere in the world via authenticated calls using HTTP or HTTPS. A single queue message can be up to 64 KB in size, and a queue can contain millions of messages, up to the total capacity limit of a storage account. Common uses for Queue storage include creating a backlog of work to process asynchronously and passing messages from an Azure Web role to an Azure Worker role.

 Azure Files provides file storage accessible through the SMB protocol using a \\Server\share format. Applications running in Azure can use it to share files between VMs using familiar file system APIs like ReadFile and WriteFile. In addition, the files can also be accessed at the same time via a REST interface, which allows you to access the shares from on-premises when you also set up a virtual network. Azure Files is built on top of the Blob service, so it inherits the availability, durability, scalability, and geo-redundancy built into Azure Storage.
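Of the component features above, the Azure Tables addressing model lends itself to a quick illustration. The sketch below is plain Python, not the real Azure SDK; the partition/row keys and entity properties are invented for the example, and the class only mimics the point-lookup behavior that makes Tables fast:

```python
# Minimal in-memory model of the Azure Tables addressing scheme:
# every entity is identified by a (PartitionKey, RowKey) pair and
# holds typed properties. Point lookups by the full key are cheap
# because they resolve to a single dictionary access.

class TableSketch:
    def __init__(self):
        self._entities = {}

    def insert(self, partition_key, row_key, **properties):
        self._entities[(partition_key, row_key)] = dict(properties)

    def get(self, partition_key, row_key):
        # The one query shape Azure Tables makes fast: lookup by full key.
        return self._entities.get((partition_key, row_key))

# Example: customer profiles keyed by region (partition) and user id (row).
table = TableSketch()
table.insert("us-west", "user42", name="Ada", signup_year=2015)
profile = table.get("us-west", "user42")
print(profile["name"])  # Ada
```

Notice that there is no schema: each entity carries its own set of properties, which matches the scenario above of storing simple per-user profile data at scale.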

There are several common scenarios for which Azure Files would be a good storage solution:

 Migrating existing apps to the cloud It’s easier to migrate on-premises applications that use file shares to share data between parts of the application. Each VM connects to the file share and can then read and write files just as it would against an on-premises file share.

 Shared application settings A common pattern for distributed applications is to have configuration files in a centralized location where they can be accessed from many different VMs. You can store these configuration files in an Azure File share where they can be read by all application instances. You can also manage the settings via the REST interface, which allows worldwide access to the configuration files.

 Diagnostic Share You can save and share diagnostic files such as logs, metrics, and crash dumps. Having these files available through both the SMB and REST interface makes it possible for applications to use a variety of analysis tools for processing and analyzing the diagnostic data.

 Dev/Test/Debug When developers or administrators are working on VMs in the cloud, they often need a set of tools or utilities. Installing and distributing these utilities on each VM is time consuming. With Azure Files, developers or administrators can store their favorite tools on a file share and connect to them from any VM.

Azure DocumentDB is a NoSQL document database service designed from the ground up to natively support JavaScript Object Notation (JSON) directly inside the database engine. It’s the right solution for applications that run in the cloud when predictable throughput, low latency, and flexible query are crucial.

A common problem for developers is that application schemas constantly evolve. DocumentDB automatically indexes all JSON documents, including all of their fields, as they are added to the database, and lets you use familiar SQL syntax to query them without specifying a schema or secondary indices up front.
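DocumentDB's own query language is SQL-based; the snippet below is only a plain-Python illustration of the schema-free idea it enables: documents with different shapes live in the same collection and can still be queried by any field, with nothing declared up front. The documents and field names here are invented:

```python
import json

# Two JSON documents with different shapes share one collection;
# no schema was declared for either of them.
collection = [
    json.loads('{"id": 1, "type": "book", "title": "Cloud Strategy"}'),
    json.loads('{"id": 2, "type": "gadget", "title": "Sensor", "voltage": 5}'),
]

def where(docs, field, value):
    # Query by any field; documents missing the field simply don't match,
    # which is how a schema-free store tolerates heterogeneous documents.
    return [d for d in docs if d.get(field) == value]

print([d["id"] for d in where(collection, "voltage", 5)])  # [2]
```

The second document gained a `voltage` field without any migration, and it became queryable immediately, which is the evolving-schema problem the paragraph above describes.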

Other data options include a wide range of relational and nonrelational applications from other vendors, including IBM and Oracle, as well as popular open-source packages such as MongoDB. The Azure Marketplace provides a convenient way for architects and application developers to find and try out different options.



Relational databases in the cloud


There are many options for relational database functionality in the cloud, and they serve different
purposes.

SQL Server in Azure Virtual Machines gives you the ability to migrate existing databases to an IaaS infrastructure. When using SQL Server in a VM, you can either bring your own SQL Server license to Azure or use one of the preconfigured SQL Server images in the Azure preview portal. SQL Server in a VM is optimized for extending existing on-premises SQL Server applications to Azure in a hybrid scenario, deploying an existing application to Azure in a migration scenario, or creating a development/test environment. An example of the hybrid scenario is keeping secondary database replicas in Azure via Azure Virtual Network.

With SQL Server in Virtual Machines, you have full administrative rights over a dedicated SQL Server instance and a cloud-based VM. It is a perfect choice when an organization already has IT resources available to maintain the VMs. With SQL Server in Virtual Machines, you can build a highly customized system to address your application’s specific performance and availability requirements.

SQL Server running in Virtual Machines is perfect when your existing and new applications require access and control over all features of a SQL Server instance, and when you want to migrate existing on-premises applications and databases to the cloud as-is. Because you do not need to change the presentation, application, and data layers, you save time and budget on redesigning your existing solution. Instead, you can focus on migrating all your solution packages to the VMs and doing some performance optimizations required by the Azure platform. You can also run many other commonly used relational databases, such as Oracle or IBM’s DB2, as VMs in the cloud.


SQL Database is a relational database-as-a-service that makes it possible for you to create new applications using database services in a PaaS environment. With SQL Database, you can develop directly against the service by using built-in features and functionality. When using SQL Database, you pay as you go, with options to scale up or out for greater power. SQL Database is optimized to reduce the overall costs of provisioning and managing many databases to a minimum, because you do not need to manage any VMs, operating systems, or database software, including upgrades, high availability, and backups.

SQL Database is the right solution for cloud-designed applications when developer productivity and fast time-to-market are critical. With programmatic, DBA-like functionality, it is perfect for cloud architects and developers because it reduces the need to manage the underlying operating system and database and helps developers automate and configure database-related tasks. For example, you can use the REST API and PowerShell cmdlets to automate and manage administrative operations for thousands of databases. With elastic scale in the cloud, you can focus on the application layer and deliver your application to market faster.


Storage


Only a few short years ago, storage options were relatively limited; but recently, both the number and the types of available storage have exploded. Whereas relational databases continue to support the highest data integrity and high performance, the table-based metaphor of relational database management systems (RDBMSs) has been augmented by other forms of storage, including binary large objects (BLOBs), simple key/value tables, document databases, graph-based databases, so-called “Big Data” storage, and others. Application developers have a wide variety of data stores to choose from; increasingly, many cloud-based applications use a combination of several types of storage, a pattern that has been termed polyglot persistence.

For example, a typical e-commerce application will usually have a transactional relational database to track purchases and sales, because an RDBMS of this sort has superb integrity; its transactions are ACID (atomic, consistent, isolated, and durable). However, these capabilities are not required for every aspect of the application: a simple file will suffice for maintaining logs, a Hadoop-based system can be used for log analysis, a document database can be used for maintaining unstructured data such as a product catalog, and so on. These “NoSQL” data stores can support transactions but are typically not as robust (for this purpose) as a relational database. On the other hand, they are ideal for quickly deriving insights from very large amounts (petabytes or more) of unstructured data, and the cloud offers vast quantities of storage space that would be very expensive for enterprises to provide on-premises.
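The polyglot-persistence pattern just described can be sketched as a thin routing layer that sends each kind of data to the store suited to it. The store names and the facade interface below are hypothetical stand-ins, not real service clients; a production version would wrap an actual RDBMS driver, blob SDK, log sink, and so on:

```python
# Hypothetical polyglot-persistence facade: each data category is routed
# to the store best suited for it, behind a single save() entry point.
# Lists stand in for the real backends named in the comments.

class PolyglotStore:
    def __init__(self):
        self.routes = {
            "order":   [],  # would be the transactional RDBMS (ACID)
            "log":     [],  # would be a flat file or Hadoop sink
            "catalog": [],  # would be a document database
        }

    def save(self, kind, record):
        if kind not in self.routes:
            raise ValueError(f"no store registered for {kind!r}")
        self.routes[kind].append(record)

store = PolyglotStore()
store.save("order", {"id": 1, "total": 9.99})   # needs ACID guarantees
store.save("log", "GET /catalog 200")           # a simple file suffices
print(len(store.routes["order"]))  # 1
```

The point of the pattern is that the application code above never changes when a backend is swapped; only the routing table does.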


Containers and microservices


A new technology that has emerged in the past year is called containers. This refers to the ability to create an application that runs with a strictly defined subset of operating system resources and is fully isolated from other applications and from the operating system. Pioneered by Docker, containers are highly portable across environments ranging from on-premises bare-metal systems to cloud environments. Docker containers now run in most cloud environments, including Azure.

Another development in cloud computing is the emergence of the actor model. Actors, which are small, highly concurrent objects, are actually a relatively old idea, first introduced in the 1970s. A niche technology until recently, actors have found great application in gaming and in Internet of Things (IoT) scenarios, in which a very large number of small objects—representing (for example) users or sensors that require the ability to communicate with one another—make up the application. For example, a small piece of code might control the operation of a valve in a pipeline autonomously, and there can be thousands of such actors, each of which operates and reports status to some central database.

Finally, so-called microservices, another recent development, are a software methodology proposing that cloud applications be made up of many independently maintained components. An e-commerce application might comprise a catalog microservice, a payments microservice, and so on, all connecting through APIs. A fabric management system (such as the forthcoming Microsoft Service Fabric) controls the concurrency and maintains the state of the microservices, providing a new form of application in which logic and state live in the same layer, yielding a more distributed and scalable application than the typical three-tier web-application design. This means, for example, that middle tiers, traditionally stateless because of scale limitations, can now be stateful, scaling out across the cloud as needed.


Platform as a Service architecture


So far, we’ve talked primarily about migrating applications in a fairly simplistic way; that is, by simply copying VMs from an on-premises datacenter to the cloud provider—the IaaS model. Of course, IaaS carries with it a number of advantages, such as passing responsibility for the datacenter to the cloud provider. But to really transform to a cloud-centric model, the next step is designing applications specifically for the cloud.

IaaS has certain limitations: you are still responsible for maintaining the system software, operating
system, and database for your application, including items such as periodic patches and software
upgrades. In fact, we can say that IaaS is only the first step in fully taking advantage of the cloud.
In Platform as a Service (PaaS) models, you only need to maintain your application; the system software is provided and maintained by the cloud provider. In addition, PaaS offerings typically add seamless scalability and resiliency by providing scale-out and data replication, and PaaS can interact with cloud services such as Microsoft Azure Active Directory for robust identity management.

Azure App Service Web Apps, for example, provides a way to rapidly provision a scalable website in the cloud with a minimum of effort. Microsoft provides the underlying web infrastructure (operating system, networking stack, storage, language support, and scalability features), removing much of the systems overhead of managing a large-scale web application. It is straightforward to configure scalability, backup, and monitoring for a Web Apps application. Web Apps also connects to all the other services offered by the cloud for rich applications.

Azure Cloud Services are a cloud analog to the “three-tier” line-of-business applications of a decade ago. In Cloud Services, an application consists of three components: a web role, effectively a web front end, scalable independently from other parts of the application; a worker role, providing background computation and processing (analogous to the business logic layer in the three-tier model); and persistent storage using an Azure-enabled version of SQL Server (Azure SQL Database). Although it requires some redesign to take an existing application to Cloud Services, this is relatively straightforward because the model is intentionally similar to three-tier.
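The web role/worker role split is usually connected by a queue, which is also the Azure Queues usage mentioned in the storage discussion earlier. The sketch below is an in-process stand-in, not the Azure APIs: the function names are invented, and only the 64 KB per-message check mirrors a documented Azure Queues limit.

```python
from collections import deque

MAX_MESSAGE_BYTES = 64 * 1024  # Azure Queues caps a single message at 64 KB

queue = deque()  # stands in for an Azure queue between the two roles

def web_role_enqueue(message: str):
    # The front end validates and hands work off instead of doing it inline,
    # so it stays responsive regardless of how slow the work is.
    if len(message.encode("utf-8")) > MAX_MESSAGE_BYTES:
        raise ValueError("message exceeds the 64 KB queue limit")
    queue.append(message)

def worker_role_drain():
    # The worker consumes messages asynchronously, at its own pace;
    # uppercasing stands in for the real background processing.
    results = []
    while queue:
        results.append(queue.popleft().upper())
    return results

web_role_enqueue("resize image 17")
web_role_enqueue("send receipt 42")
print(worker_role_drain())  # ['RESIZE IMAGE 17', 'SEND RECEIPT 42']
```

Because the two roles share nothing but the queue, each can be scaled out independently, which is the property the three-component design above is built around.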

