VNET Service Endpoints for Azure SQL & Storage

It’s finally here, it has arrived: Azure Virtual Network Service Endpoints.

This was a long-requested “Enterprise feature”.

Let’s look at what this is and how to use it.

Please note that at the time of this writing (end of September 2017) this feature is available in Public Preview in only a few regions:

- Azure Storage: WestCentralUS, WestUS2, EastUS, WestUS, AustraliaEast, and AustraliaSouthEast
- Azure SQL Database: WestCentralUS, WestUS2, and EastUS

Online Resources

Here is a bit of online documentation about the topic:

- Service Launch blog
- Virtual Network Service Endpoints
- How to Configure Virtual Network Service Endpoints
- Virtual Network Service Endpoints with Azure SQL
- Virtual Network Service Endpoints with Azure Storage
- Service Tags

The problem

The first (historically) Azure services, e.g. Azure Storage and Azure SQL, were built with a public-cloud philosophy:

- They are accessible through public IPs
- They are multi-tenant, i.e. public IPs are shared between many domain names
- They live on shared infrastructure (e.g. VMs)
- etc.

Many more recent services share those characteristics, for instance Data Factory, Event Hub, Cosmos DB, etc.

Those are all Platform as a Service (PaaS) services.

Then came the IaaS wave, offering more control and being less opinionated about how we should expose and manage cloud assets. With it we could replicate, in large part, an on-premises environment. First came Virtual Machines, then Virtual Networks (akin to on-premises VLANs), then Network Security Groups (akin to on-premises firewall rules), then Virtual Network Appliances (literally a software version of an on-premises firewall), etc.

Enterprises love IaaS as it allows them to migrate assets to the cloud more quickly, since they can more easily adapt their governance model.

But Enterprises, like all cloud users, realize that the best TCO is in PaaS services.

This is where the two models collided.

After we spent all this effort stonewalling our VMs within a Virtual Network, implementing access rules (e.g. inbound port 80 connections can only come from on-premises users through the VPN Gateway), were we going to access the Azure SQL Database through a public endpoint?

That didn’t go down easily.

Azure SQL DB specifically has an integrated firewall. We can block all access, allow only specific IP ranges (again, good for connections over the internet), or allow “Azure connections”. The last one looks more secure, as no one from a hotel room (or a bed in New Jersey) could access the database. But anyone within Azure, anyone, could still access it.

The typical default architecture was something like this:

[image]

This put a lot of friction on the adoption of PaaS services by Enterprise customers.

The solution until now

The previous diagram is a somewhat naïve deployment and we could do better. A lot of production deployments are like this, though.

We could do better by controlling access via incoming IP addresses. Outbound connections from a VM come through a public IP. We could filter on that IP within the PaaS service’s integrated firewall.

In order to do that, we needed a static public IP, though. Dynamic IPs preserve their domain name but they aren’t guaranteed to preserve their underlying IP value.

[image]

This solution had several disadvantages:

- It requires a different paradigm (public IP filtering vs VNET / NSGs) to secure access
- It requires static IPs
- If the VMs were not meant to be exposed on the internet, it adds configuration and triggers security questions during reviews
- A lot of deployments included “force tunneling” to the on-premises firewall for internet access; since Azure SQL DB technically is on the internet, traffic was routed on-premises, increasing latency substantially

And this is where we were until this week, when VNET Service Endpoints were announced at Microsoft Ignite.

The solution

The ideal solution would be to instantiate the PaaS service within a VNET. For a lot of PaaS services, given their multi-tenant nature, this is impossible.

That is the approach taken by a lot of single-tenant PaaS services, though, e.g. HDInsight, Application Gateway, Redis Cache, etc.

For multi-tenant PaaS where the communication is always outbound to the Azure service (i.e. the service doesn’t initiate a connection to our VMs), the solution going forward is VNET Service Endpoints.

At the time of this writing, only Azure Storage, Azure SQL DB and Azure SQL Data Warehouse support this mechanism. Other PaaS services are planned to support it in the future.

VNET Service Endpoints do the next best thing to instantiating the PaaS service in our VNET: they allow us to filter connections according to the VNET / subnet of the source.

This is made possible by a fundamental change in the Azure network stack: VNETs now have identities that can be carried with a connection.

So we are back to where we wanted to be:

[image]

The solution isn’t perfect. For instance, it doesn’t allow filtering for connections coming from on-premises computers via a VPN Gateway: the connection needs to be initiated from the VNET itself for VNET Service Endpoints to work.

Also, access to PaaS resources is still done via the PaaS public IP, so the VNET must allow connections to the internet. This is mitigated by new service tags allowing traffic to be restricted to a specific PaaS service; for instance, we could allow traffic going only to Azure SQL DBs (although not to our Azure SQL DB instance only).

The connection does bypass “force tunneling”, though, so the traffic stays on the Microsoft network, thus improving latency.

Summary

VNET Service Endpoints allow us to secure access to PaaS services such as Azure SQL DB, Azure SQL Data Warehouse and Azure Storage (and soon more).

It offers something close to bringing the PaaS services into our VNET.
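As a concrete sketch, the setup described above can be expressed with the Azure CLI. All resource names below (resource group, VNET, subnet, server, NSG) are placeholders, and the commands assume an existing VNET, subnet, Azure SQL logical server and NSG in a region where the preview is available:

```shell
#!/usr/bin/env bash
# Hypothetical resource names -- adjust to your environment.
RG=my-rg
VNET=my-vnet
SUBNET=app-subnet
SQL_SERVER=my-sql-server
NSG=app-nsg

# 1. Turn on the Microsoft.Sql service endpoint on the subnet.
az network vnet subnet update \
  --resource-group "$RG" \
  --vnet-name "$VNET" \
  --name "$SUBNET" \
  --service-endpoints Microsoft.Sql

# 2. On the Azure SQL logical server, allow traffic from that subnet only
#    (with no firewall rules for public IPs, public access is effectively blocked).
az sql server vnet-rule create \
  --resource-group "$RG" \
  --server "$SQL_SERVER" \
  --name allow-app-subnet \
  --vnet-name "$VNET" \
  --subnet "$SUBNET"

# 3. Optionally, use a service tag as the NSG destination so outbound traffic
#    is allowed only towards Azure SQL in one region, not the whole internet.
az network nsg rule create \
  --resource-group "$RG" \
  --nsg-name "$NSG" \
  --name allow-sql-outbound \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 1433 \
  --destination-address-prefixes Sql.EastUS
```

The same pattern applies to Azure Storage, using `--service-endpoints Microsoft.Storage` on the subnet and `az storage account network-rule add` on the storage account.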

5 responses

  1. Mon Burayag 2018-05-09 at 16:45

    With an NSG allowing access to Azure SQL PaaS using a tag and port 1433, you can disable the Service Endpoint and still connect from an IaaS VM to PaaS SQL. In other words, the SQL Service Endpoint is not really helping to secure communication. It is useless, I should say. You can’t use the SQL Service Endpoint WITHOUT allowing the Azure VM to go to the public internet. It would have been better if everything happened without vNet\Subnet and the SQL Service Endpoint only. But it’s not.

  2. Mon Burayag 2018-05-09 at 16:47

    It would have been better if everything happened WITHIN the vNet\Subnet and the SQL Service Endpoint only. But it’s not.

  3. Vincent-Philippe Lauzon 2018-05-10 at 06:24

    You do not need to allow internet access. You can use “Service Tag” as Destination and then “SQL.EastUS” (for example) as “Destination Service Tag”.

    That is one of the goals of the Service Endpoint feature: not requiring outbound internet access.

  4. Okan Aslaner 2019-06-26 at 20:49

    The last diagram shows that access from the Internet to the Azure SQL db is not allowed. If the user is trying to connect to the Azure SQL db using SSMS, the user will be asked if he/she wants to add the client IP to the whitelist of the firewall. The user will be able to add his/her client IP if they have contributor level access to the Azure SQL resource. At that point, if they know the username/password for the db, they will be able to get in.

  5. Vincent-Philippe Lauzon 2019-06-27 at 05:42

    Yes… but they need to be admin on the SQL resource. It’s equivalent to opening the Azure Portal and changing the Firewall rules.

    There are 2 network access controls on Azure SQL (and most PaaS services): Firewall & Service Endpoint. The former controls access for public IPs; the latter, for private IPs.

    In the setup shown in this diagram, we do not use the Firewall. Hence we disable all public IP access. We then only allow subnets using Service Endpoint.

    The procedure you mention is SSMS allowing you to configure the Firewall to authorize the client’s public IP. Doing so, you break the setup…

    So, yes, you are correct, but you need to be contributor on the resource to do so and yes, you can break the configuration when you are contributor… But the setup is still secure if no one tampers with it.

    In a typical production environment, developers / DBAs wouldn’t be contributor on the resource and hence wouldn’t be able to do that.
