Channel: Microsoft Azure Cloud Integration Engineering

Windows Azure : ACS with ADFS2.0


Recently I was dealing with a case where a site, say https://www.test.contoso.com/, that had been working fine on a Windows Azure Web Role leveraging ACS for authentication started failing with the following error.

 

 Server Error in '/' Application.

--------------------------------------------------------------------------------  

ID4175: The issuer of the security token was not recognized by the IssuerNameRegistry. To accept security tokens from this issuer, configure the IssuerNameRegistry to return a valid name for this issuer.

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.   

Exception Details: System.IdentityModel.Tokens.SecurityTokenException: ID4175: The issuer of the security token was not recognized by the IssuerNameRegistry. To accept security tokens from this issuer, configure the IssuerNameRegistry to return a valid name for this issuer.  

Source Error:    

An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.     

Stack Trace:   

[SecurityTokenException: ID4175: The issuer of the security token was not recognized by the IssuerNameRegistry. To accept security tokens from this issuer, configure the IssuerNameRegistry to return a valid name for this issuer.]

Microsoft.IdentityModel.Tokens.Saml11.Saml11SecurityTokenHandler.CreateClaims(SamlSecurityToken samlSecurityToken) +739

Microsoft.IdentityModel.Tokens.Saml11.Saml11SecurityTokenHandler.ValidateToken(SecurityToken token) +628

Microsoft.IdentityModel.Tokens.SecurityTokenHandlerCollection.ValidateToken(SecurityToken token) +117

Microsoft.IdentityModel.Web.TokenReceiver.AuthenticateToken(SecurityToken token, Boolean ensureBearerToken, String endpointUri) +151

Microsoft.IdentityModel.Web.WSFederationAuthenticationModule.SignInWithResponseMessage(HttpRequest request) +583

Microsoft.IdentityModel.Web.WSFederationAuthenticationModule.OnAuthenticateRequest(Object sender, EventArgs args) +500

System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +80

System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +270

 

I looked at the portal and found that ACS in this case was leveraging an ADFS 2.0 server as the STS (Security Token Service). A quick look told me this is more of an ADFS/WIF error than an ACS error, since the error code starts with ID rather than ACS. Some research on the issue suggested that the thumbprint in my web.config was wrong and might not match that of the local certificate.

Snippet from Web.Config

 <issuerNameRegistry type="Microsoft.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">

 <trustedIssuers>

 <add thumbprint="68DFAO902313WBD2F76797B27396F2AAE625AF95A6" name="TestSTS"/>

 </trustedIssuers>

 </issuerNameRegistry>

Certificate

  a. Click Start > Run, type MMC.exe, and press Enter
  b. Click File > Add/Remove Snap-in
  c. Double-click Certificates
  d. Select Computer account and click Next
  e. Select Local computer and click Finish
  f.  Expand Certificates (Local Computer), expand Personal, and select Certificates

But both values were the same, so there was no mismatch. Further research showed that in ADFS 2.0, AutoCertificateRollover is enabled by default if the installation is done using the GUI.

http://social.technet.microsoft.com/wiki/contents/articles/ad-fs-2-0-how-to-enable-and-immediately-use-autocertificaterollover.aspx

Excerpt from the article  

"When the GUI Initial Configuration Wizard (ICW) of AD FS 2.0 has been executed, AutoCertificateRollover is automatically enabled by default and the token-signing and token-decrypting certificates are self-signed and maintained by the AD FS 2.0 service."

So it was likely that the certificate had been auto-rolled-over, and hence the thumbprint in the web.config no longer matched that of the certificate on the ADFS 2.0 STS server, even though it matched the local copy.

I requested the customer to follow up with the STS (Security Token Service) provider they redirect to for authentication and check whether the certificate on the ADFS 2.0 server had been rolled over. Not to my surprise, it had been, which is why login validation was failing. We obtained the new certificate and added its thumbprint to the web.config (please type the thumbprint rather than copy-pasting it, as invisible characters might get copied along). Now the redirection happens successfully and the error no longer shows up.
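If you do need to paste a thumbprint, a small sanitization step catches this whole class of problem. A minimal illustrative sketch (not part of any Microsoft SDK):

```python
import string

def clean_thumbprint(raw: str) -> str:
    """Keep only hex characters; drops spaces and invisible Unicode
    (e.g. the left-to-right mark the certificates snap-in copies along)."""
    return "".join(c for c in raw.upper() if c in string.hexdigits)

# A pasted value with a leading invisible character and spaces:
raw = "\u200e68 DF A0 90 23 13"
print(clean_thumbprint(raw))  # 68DFA0902313
```

A full SHA-1 thumbprint should come out to exactly 40 hex characters; any other length signals leftover junk in the pasted value.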

Angshuman Nayak, Cloud Integration Engineer, Microsoft


ID1113: The ACS service namespace: ‘’ and management key combination is invalid


 

In Visual Studio 2012, when you configure an ACS reference via “Identity and Access” for your application, you are required to configure the ACS namespace along with its management key.

 

image

 

Note: You can get the management key from ACS Management portal –> Administration –> Management Service –> Management Client –> Symmetric Key –> Show Key

 

You may get an error as depicted below when you click OK after entering your ACS namespace and management key, even though the key belongs to that same ACS namespace.

 

image

 

You may get this error for a few reasons:

1. your ACS namespace was created in the old management portal

2. there is a typo in either the namespace name or the management key, or there are preceding/succeeding spaces or invisible special characters

3. the management key does not belong to the corresponding ACS namespace, or vice versa

 

In my case, the issue was that I was using an ACS namespace I had created long back in the old management portal and was getting the above error; I resolved it by creating a new ACS namespace in the new management portal.
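A quick way to rule out reason 2 before clicking OK is to normalize the pasted values. A hedged, purely illustrative sketch:

```python
import unicodedata

def normalize_pasted(value: str) -> str:
    """Trim surrounding whitespace and drop invisible format
    characters (Unicode category 'Cf') that survive a copy-paste."""
    return "".join(
        c for c in value.strip() if unicodedata.category(c) != "Cf"
    )

namespace = normalize_pasted("  mynamespace\u200e ")  # hypothetical value
print(repr(namespace))  # 'mynamespace'
```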

Delegation object reports “ACS90013: Object exceeded its maximum length”


If ACS 2.0 is being used and the following code creates a Delegation object for OAuth 2.0:

 Delegation delegation = new Delegation()
 {
     NameIdentifier = userName,
     IdentityProvider = identityProvider,
     RelyingPartyId = relyingPartyId,
     ServiceIdentityId = serviceIdentityId,
 };

 svc.AddToDelegations(delegation);

An ACS exception message may be returned as follows

 "<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"yes\"?>\r\n<error xmlns=\"http://schemas.microsoft.com/ado/2007/08/dataservices/metadata\">\r\n  <code></code>\r\n<message xml:lang=\"en-US\">ACS90013: Object exceeded its maximum length. Trace ID: 3a0afb00-7ba7-4d38-b91a-558801df0972. Timestamp: 2013-07-15 02:46:51Z</message>\r\n</error>"
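The payload is a standard OData error document, so the code and message can be pulled out programmatically. A small sketch using Python's standard library (the XML below mirrors the error above):

```python
import xml.etree.ElementTree as ET

ACS_ERROR = """<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
  <code></code>
  <message xml:lang="en-US">ACS90013: Object exceeded its maximum length. Trace ID: 3a0afb00-7ba7-4d38-b91a-558801df0972. Timestamp: 2013-07-15 02:46:51Z</message>
</error>"""

# Elements live in the OData metadata namespace, so qualify the tag lookup.
NS = "{http://schemas.microsoft.com/ado/2007/08/dataservices/metadata}"

root = ET.fromstring(ACS_ERROR)
message = root.findtext(NS + "message")
error_code = message.split(":", 1)[0]  # the leading "ACS90013"
print(error_code)
```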

 

While the error message is pretty self-explanatory, the crucial part is what length triggers it. The MSDN documentation for Delegation doesn’t specify the maximum length of the strings that can be assigned to “NameIdentifier” and “IdentityProvider”.

I did some debugging (yeah, I work at Microsoft CSS so I can do funny stuff) and found that when we create a Delegation object, both “NameIdentifier” and “IdentityProvider” have a maximum length of 256 characters.

So keep the strings within 256 characters and the error will not show up. In short, check the length of the values being assigned to “NameIdentifier” and “IdentityProvider” to avoid the error ACS90013: Object exceeded its maximum length.
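Translating that finding into a guard clause, here is a hedged sketch (the 256-character cap is the observed behavior described above, not a documented contract):

```python
ACS_MAX_FIELD_LENGTH = 256  # observed limit for NameIdentifier / IdentityProvider

def check_delegation_fields(name_identifier: str, identity_provider: str) -> None:
    """Fail fast in your own code, before ACS returns ACS90013."""
    for label, value in (("NameIdentifier", name_identifier),
                         ("IdentityProvider", identity_provider)):
        if len(value) > ACS_MAX_FIELD_LENGTH:
            raise ValueError(
                "%s is %d characters; ACS allows at most %d (ACS90013)"
                % (label, len(value), ACS_MAX_FIELD_LENGTH))

check_delegation_fields("user@contoso.com", "https://adfs.contoso.com/")  # passes
```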

 

Angshuman Nayak, Escalation Services, Cloud Integration Engineering

 

 

ACS60021: The request has been terminated by the server (tenant exceeded rate limit)


If you are using ACS aggressively, you may sometimes hit the following error.

 ACS60021: The request has been terminated by the server (tenant exceeded rate limit). Trace ID: d80e94a4-8571-45cc-82f1-b900f7f3ce16. Timestamp: 2013-07-26 01:39:25Z

 ----- Exception -----

 Expected:

 Actual: System.Data.Services.Client.DataServiceQueryException: An error occurred while processing this request. --->
System.Data.Services.Client.DataServiceClientException: <?xml version="1.0" encoding="utf-8" standalone="yes"?>

 <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
 <code></code>
 <message xml:lang="en-US">ACS60021: The request has been terminated by the server (tenant exceeded rate limit). Trace ID: d80e94a4-8571-45cc-82f1-b900f7f3ce16.
Timestamp: 2013-07-26 01:39:25Z</message>
 </error>
 at System.Data.Services.Client.QueryResult.ExecuteQuery()
 at System.Data.Services.Client.DataServiceRequest.Execute[TElement](DataServiceContext context, QueryComponents queryComponents)
 --- End of inner exception stack trace ---
 at System.Data.Services.Client.DataServiceRequest.Execute[TElement](DataServiceContext context, QueryComponents queryComponents)
 at System.Data.Services.Client.DataServiceQuery`1.Execute()
 at System.Data.Services.Client.DataServiceQuery`1.GetEnumerator()
 at Microsoft.DataTransfer.Authentication.AcsToken.Management.AcsManagementServiceExtensions.DeleteServiceIdentityIfExists(ManagementService svc, String name) in

You might have hit a cul-de-sac while trying to find more information, given that the error code and error message don’t turn up anything relevant in search engines. So I dug in deep and got to the real issue behind it; time to break the mystery.

The reason is that ACS rejects token requests when its internal resources are temporarily exhausted by a high token request rate across all namespaces. In this case, ACS returns an HTTP 503 "Service unavailable" error with the ACS90046 or ACS60021 (ACS Busy) error codes.

We have now documented the limitations of ACS service at http://msdn.microsoft.com/en-us/library/gg185909.aspx. The page with error codes http://msdn.microsoft.com/en-us/library/gg185949.aspx has also been updated for ACS90046 (ACS busy) or ACS60021.

Please refer to that page for the ACS service limitations, and to the ACS retry guidelines at http://msdn.microsoft.com/en-us/library/jj878112.aspx.
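Since ACS60021/ACS90046 are transient throttling responses, the retry guidelines boil down to backing off and trying again. A language-agnostic sketch of that pattern in Python (`TransientAcsError` is a hypothetical stand-in for detecting the HTTP 503 response in your client):

```python
import random
import time

class TransientAcsError(Exception):
    """Hypothetical stand-in for the HTTP 503 / ACS60021 failure."""

def call_with_backoff(operation, max_attempts=5, base_delay=1.0):
    """Retry a throttled ACS call with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except TransientAcsError:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            delay = base_delay * (2 ** attempt)
            time.sleep(delay + random.uniform(0, delay))  # jittered wait
```

The jitter spreads retries from many clients apart, so a throttled namespace is not hammered by synchronized retry storms.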

Hope this blog gave you an insight into the issue and cleared a few threads of doubt around ACS.

Deploying Claims Aware Azure Applications using WIF


If you are deploying a claims-aware application that uses Windows Identity Foundation (WIF), whether for the first time or after an SDK upgrade, there is a good chance you will hit the error below.

---> System.Runtime.Serialization.SerializationException: Type is not resolved for member

'Microsoft.IdentityModel.Claims.ClaimsPrincipal,Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.

at System.AppDomain.get_Evidence()

at System.AppDomain.get_Evidence()

      at System.Configuration.ClientConfigPaths.GetEvidenceInfo(AppDomain appDomain, String exePath, String& typeName)

      at System.Configuration.ClientConfigPaths.GetTypeAndHashSuffix(AppDomain appDomain, String exePath)

      at System.Configuration.ClientConfigPaths..ctor(String exePath, Boolean includeUserConfig)

      at System.Configuration.ClientConfigPaths.GetPaths(String exePath, Boolean includeUserConfig)

      at System.Configuration.ClientConfigurationHost.RequireCompleteInit(IInternalConfigRecord record)

      at System.Configuration.BaseConfigurationRecord.GetSectionRecursive(String configKey, Boolean getLkg, Boolean checkPermission, Boolean getRuntimeObject, Boolean requestIsHere, Object& result, Object& resultRuntimeObject)

      at System.Configuration.BaseConfigurationRecord.GetSection(String configKey)

      at System.Xml.XmlConfiguration.XmlReaderSection.CreateDefaultResolver()

      at System.Xml.Schema.XmlSchema.Read(XmlReader reader, ValidationEventHandler validationEventHandler)

 

If you RDP to the role and look into D:\Windows\assembly you will not find the Microsoft.IdentityModel and associated dlls.

The reason is that WIF is not part of the base operating system image used for a web or worker role virtual machine running Windows Server 2008 R2 or lower. So the WIF module needs to be installed before the role starts, so that the code finds the related dependencies in the GAC.

The following steps need to be followed

a)      Download the WIF msu from http://www.microsoft.com/en-us/download/details.aspx?id=17331 and add the file “Windows6.1-KB974405-x64.msu” to the project.

b)      Create a batch file named, say, installWIF.cmd

         For this, open Notepad and add the following content

        @echo off

        rem wusa.exe needs the Windows Update service, so enable it for the install
        sc config wuauserv start= demand

        wusa.exe "Windows6.1-KB974405-x64.msu" /quiet /norestart

        rem return the service to its disabled state
        sc config wuauserv start= disabled

        exit /b 0

c)      Add this file in the visual studio project.

d)      Mark both the batch file and the msu file as “Copy to Output Directory” in Visual Studio. This makes sure both files end up in the bin folder of your role, which is the location Windows Azure will look for them.

           

 

e)      Create the startup task by adding this code to ServiceDefinition.csdef in the web role:

        <Startup>

          <Task commandLine="installWIF.cmd" executionContext="elevated" />

        </Startup>

 

Redeploy the package and the role should now come up fine and be able to load the WIF modules. You can also check in D:\Windows\assembly that the modules are installed in the GAC.

If you are deploying to a Windows Server 2012 (OS family 3) virtual machine, then WIF 4.5 is already part of the platform and you don’t need the grunge work above. But in WIF 4.5 a few namespaces have been moved (e.g. Microsoft.IdentityModel types now live under System.IdentityModel), so you might want to look at Guidelines for Migrating an Application Built Using WIF 3.5 to WIF 4.5.

So lo and behold, your claims-aware application is now up and running.

ACS: Identifying and Updating Expiring Certificates, Symmetric Keys and Passwords

Errors ID4175 and WIF10201 in context of ACS


The purpose of this blog is to present a couple of error messages I ran into while setting up single sign-on from Active Directory to a web application using Windows Azure Access Control Service (ACS).

I configured my Microsoft Active Directory Federation Services (AD FS) 2.0 server as an identity provider and set up my web application as a relying party application in ACS.

http://msdn.microsoft.com/en-us/library/windowsazure/gg429779.aspx and http://msdn.microsoft.com/en-us/library/windowsazure/gg185961.aspx are good references for this.

I am using a self-signed certificate in ACS for Token Signing and I configured the certificate in the management portal for my ACS namespace as shown below.

clip_image001[4]

I added the necessary sections in the <system.identityModel> section of the web.config file for the web application to integrate with ACS.

Now when I run my web application, I get redirected to the login page from ACS and I select my ADFS identity provider to login and provide credentials for my AD user and I get this error:

SecurityTokenException: ID4175: The issuer of the security token 
was not recognized by the IssuerNameRegistry. To accept security tokens from this issuer, configure the
IssuerNameRegistry to return a valid name for this issuer.]
   System.IdentityModel.Tokens.Saml2SecurityTokenHandler.
ValidateToken(SecurityToken token)
   System.IdentityModel.Tokens.SecurityTokenHandlerCollection.
ValidateToken(SecurityToken token)
   System.IdentityModel.Services.TokenReceiver.
AuthenticateToken(SecurityToken token, Boolean ensureBearerToken,
String endpointUri)
   System.IdentityModel.Services.WSFederationAuthenticationModule.
SignInWithResponseMessage(HttpRequestBase request)
   System.IdentityModel.Services.WSFederationAuthenticationModule.
OnAuthenticateRequest(Object sender, EventArgs args)
   System.Web.SyncEventExecutionStep.System.Web.HttpApplication.
IExecutionStep.Execute()
   System.Web.HttpApplication.ExecuteStep(IExecutionStep step,
Boolean& completedSynchronously)

Since I am using a self-signed certificate, I add the following to my <identityConfiguration> section within <system.identityModel> to get past the error.

<issuerNameRegistry type="System.IdentityModel.Tokens.ValidatingIssuerNameRegistry, System.IdentityModel.Tokens.ValidatingIssuerNameRegistry">
  <authority name="https://imtiazhnamespace.accesscontrol.windows.net/">
    <keys>
      <add thumbprint="9DFF02F5DF0F9346CA9E9EFA7BF7D14BF99DE1EA" />
    </keys>
    <validIssuers>
      <add name="https://imtiazhnamespace.accesscontrol.windows.net/" />
    </validIssuers>
  </authority>
</issuerNameRegistry>

Now when I run the application, I get the following error, which got me stumped, because the thumbprint in my web.config does match the thumbprint of my token signing certificate in ACS.

SecurityTokenValidationException: WIF10201: No valid key mapping 
found for securityToken:
'System.IdentityModel.Tokens.X509SecurityToken' and issuer: 'https://imtiazhnamespace.accesscontrol.windows.net/'.]
   System.IdentityModel.Tokens.Saml2SecurityTokenHandler.
ValidateToken(SecurityToken token)
   System.IdentityModel.Tokens.SecurityTokenHandlerCollection.
ValidateToken(SecurityToken token)
   System.IdentityModel.Services.TokenReceiver.
AuthenticateToken(SecurityToken token, Boolean ensureBearerToken,
String endpointUri)
   System.IdentityModel.Services.WSFederationAuthenticationModule.
SignInWithResponseMessage(HttpRequestBase request)
   System.IdentityModel.Services.WSFederationAuthenticationModule.
OnAuthenticateRequest(Object sender, EventArgs args)
   System.Web.SyncEventExecutionStep.System.Web.HttpApplication.
IExecutionStep.Execute()
   System.Web.HttpApplication.ExecuteStep(IExecutionStep step,
Boolean& completedSynchronously)

It turned out that when I pasted the thumbprint value into Visual Studio from the certificates snap-in, an extra (invisible) Unicode character got copied along, so the certificate’s thumbprint did not match.

The following KB talks about it. When I tried saving the value in Notepad, it did report that the document contains Unicode characters.

http://support.microsoft.com/kb/2023835

clip_image002[4]

I then deleted the first invisible character and got it to work.
I could have also copied the thumbprint from the Azure management portal (the first snapshot above) and not run into this, but I happened to have the same certificate installed on my web server, so I chose to copy from the MMC and inadvertently spent some time troubleshooting it :)

Azure Service Bus AMQP Using Java SDK : Peer did not create remote endpoint for link, target: amqp_queue


 

While setting up an Azure Service Bus AMQP Java project in Eclipse, following the code from How to Use JMS with AMQP 1.0 in Azure with Eclipse, I kept getting the following error:

javax.jms.JMSException: Peer did not create remote endpoint for link, target: amqp_queue_portal
    at org.apache.qpid.amqp_1_0.jms.impl.MessageProducerImpl.<init>(MessageProducerImpl.java:77)
    at org.apache.qpid.amqp_1_0.jms.impl.SessionImpl.createProducer(SessionImpl.java:348)
    at org.apache.qpid.amqp_1_0.jms.impl.SessionImpl.createProducer(SessionImpl.java:63)
    at SimpleSenderReceiver.<init>(SimpleSenderReceiver.java:41)
    at SimpleSenderReceiver.main(SimpleSenderReceiver.java:59)

Caused by: org.apache.qpid.amqp_1_0.client.Sender$SenderCreationException: Peer did not create remote endpoint for link, target: testqueue
    at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:171)
    at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:104)
    at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:97)
    at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:83)
    at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:69)
    at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:63)
    at org.apache.qpid.amqp_1_0.client.Session.createSender(Session.java:74)
    at org.apache.qpid.amqp_1_0.client.Session.createSender(Session.java:66)
    at org.apache.qpid.amqp_1_0.jms.impl.MessageProducerImpl.<init>(MessageProducerImpl.java:72)

The queue in this case was created from the Azure Management Portal. A search on the internet pointed to a lot of hits on Stack Overflow, but none of them seemed to provide a conclusive answer. So I debugged the Java code and read through some of the AMQP documentation at

https://apache.googlesource.com/qpid/+/c8d0fb167d8fc89fcb27823414454675b60a9dc1/qpid/java/amqp-1-0-client/src/main/java/org/apache/qpid/amqp_1_0/client/Sender.java

http://msdn.microsoft.com/en-us/library/azure/hh780773.aspx

Later I created a queue using code instead of the Management Portal, and with this new queue the Java code worked fine.

string connectionString =
    CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
var namespaceManager =
    NamespaceManager.CreateFromConnectionString(connectionString);

if (!namespaceManager.QueueExists("amqp_queue_code"))
{
    namespaceManager.CreateQueue("amqp_queue_code");
}

So I used Service Bus Explorer to compare the properties of the queues amqp_queue_portal and amqp_queue_code, and found that sending fails if the queue is “Partitioned”; AMQP appears to need message ordering. If you create a queue from the portal via Quick Create, it creates a “Partitioned” queue by default. So when creating a queue from the portal, select Custom Create and un-check “Enable Partitioning”. It should look as below.

clip_image001

I am able to get the messages now using the Java AMQP code published at http://azure.microsoft.com/en-us/documentation/articles/service-bus-java-how-to-use-jms-api-amqp/

clip_image003

Hope this blog helps you overcome the javax.jms.JMSException: Peer did not create remote endpoint for link, target: amqp_queue_portal error.

Angshuman Nayak

Cloud Integration Engineering


Troubleshooting Scenario – High CPU usage on PaaS roles with the same load after running for a few hours


 

I had an interesting issue reported where the instance count would increase from 10 to 50 over the course of a month, with the same load and number of users. This was really perplexing for the customer’s Azure application developers, so they reported the issue to Microsoft Azure Support.

At first analysis we found they were manually increasing the instance count because the existing instances would hit around 90% CPU, plateau there, and become more or less unresponsive. So it was relatively simple to isolate the cause of the increase in instance count; the crucial thing was to find the cause of the high CPU.

High CPU on an instance is generally caused by application code, so we took memory dumps on one of the instances when it got into the high-CPU situation. The analysis approach is like the one I detailed in this blog: http://blogs.msdn.com/b/cie/archive/2013/11/28/windows-azure-worker-role-showing-high-cpu.aspx

Process to collect dumps  

  a) RDP to the instance running the Cloud Service.
  b) From Task Manager, check which process is taking the most CPU and staying there without coming down.
  c) Right-click that process and collect a full crash dump; repeat this every minute for, say, 5 minutes, so you end up with 5 dump files.
  d) Once you have the files, you can analyze them yourself or create a ticket with Microsoft for an engineer to help analyze.

In this case it was the W3WP process, so we collected process dumps of it. In the first 2 memory dumps I didn’t find any high CPU.

Dump Analysis  

I am not delving into the details of how I did it, as that merits a separate discussion. Across the dumps I see the CPU moving between 81% and 100%. Most of the calls that are stuck look like the following:

SP               IP               Function                                                                                                                                                                                                                                                        Source
00000011a7ff9008 0000000000000000 HelperMethodFrame                                                                                                                                                                                                                                               
00000011a7ff9150 00007ff90caf1177 Microsoft.Data.OData.DuplicatePropertyNamesChecker.CheckForDuplicatePropertyNames(Microsoft.Data.OData.ODataProperty)                                                                                                                                           
00000011a7ff91b0 00007ff90caee6d8 Microsoft.Data.OData.Atom.ODataAtomPropertyAndValueDeserializer.ReadPropertiesImplementation(Microsoft.Data.Edm.IEdmStructuredType, System.Collections.Generic.List`1<Microsoft.Data.OData.ODataProperty>, Microsoft.Data.OData.DuplicatePropertyNamesChecker,  
00000011a7ff9240 00007ff90caede16 Microsoft.Data.OData.Atom.ODataAtomEntryAndFeedDeserializer.ReadAtomContentElement(Microsoft.Data.OData.Atom.IODataAtomReaderEntryState)                                                                                                                        
00000011a7ff92c0 00007ff90caec553 Microsoft.Data.OData.Atom.ODataAtomEntryAndFeedDeserializer.ReadAtomElementInEntry(Microsoft.Data.OData.Atom.IODataAtomReaderEntryState)                                                                                                                        
00000011a7ff9300 00007ff90caec2a2 Microsoft.Data.OData.Atom.ODataAtomEntryAndFeedDeserializer.ReadEntryContent(Microsoft.Data.OData.Atom.IODataAtomReaderEntryState)                                                                                                                              
00000011a7ff9370 00007ff90cae9665 Microsoft.Data.OData.Atom.ODataAtomReader.ReadEntryStart()                                                                                                                                                                                                      
00000011a7ff93e0 00007ff90caf2ef6 Microsoft.Data.OData.Atom.ODataAtomReader.ReadAtEntryEndImplementation()                                                                                                                                                                                        
00000011a7ff9430 00007ff90cae88df Microsoft.Data.OData.ODataReaderCore.ReadImplementation()                                                                                                                                                                                                       
00000011a7ff9480 00007ff90cae8727 Microsoft.Data.OData.ODataReaderCore.InterceptException[[System.Boolean, mscorlib]](System.Func`1<Boolean>)                                                                                                                                                     
00000011a7ff94f0 00007ff90cca40a6 Microsoft.WindowsAzure.Storage.Table.Protocol.TableOperationHttpResponseParsers.TableQueryPostProcessGeneric[[System.__Canon, mscorlib]](System.IO.Stream, System.Func`6<System.String,System.String,System.DateTimeOffset,System.Collections.Generic.IDictiona 
00000011a7ff9580 00007ff90cca3df1 Microsoft.WindowsAzure.Storage.Table.TableQuery`1+<>c__DisplayClassf`2[[System.__Canon, mscorlib],[System.__Canon, mscorlib],[System.__Canon, mscorlib]].<QueryImpl>b__e(Microsoft.WindowsAzure.Storage.Core.Executor.RESTCommand`1<Microsoft.WindowsAzure.Stor 
00000011a7ff95e0 00007ff90caddc69 Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ProcessEndOfRequest[[System.__Canon, mscorlib]](Microsoft.WindowsAzure.Storage.Core.Executor.ExecutionState`1<System.__Canon>)                                                                            
00000011a7ff9630 00007ff90cad99e5 Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync[[System.__Canon, mscorlib]](Microsoft.WindowsAzure.Storage.Core.Executor.StorageCommandBase`1<System.__Canon>, Microsoft.WindowsAzure.Storage.RetryPolicies.IRetryPolicy, Microsoft.WindowsAz 
00000011a7ff9970 00007ff90cca30ae Microsoft.WindowsAzure.Storage.Table.TableQuery`1+<>c__DisplayClass7[[System.__Canon, mscorlib]].<ExecuteInternal>b__6(Microsoft.WindowsAzure.Storage.IContinuationToken)                                                                                       
00000011a7ff99d0 00007ff90cca2fd8 Microsoft.WindowsAzure.Storage.Core.Util.CommonUtility+<LazyEnumerable>d__0`1[[System.__Canon, mscorlib]].MoveNext()                                                                                                                                            
00000011a7ff9a40 00007ff90cca22f8 MoviePlayer.TableStorage.GetListRangeEntity[[System.__Canon, mscorlib]](System.Collections.Generic.List`1<MoviePlayer.QueryTableStorage>, System.String)                                                                                                        
00000011a7ff9b70 00007ff90cca0612 MoviePlayer.NoSQLData.GetListCategoryTS(MoviePlayer.Video, System.String)                                                                                                                                                                          
00000011a7ffa020 00007ff90cca3ab8 MoviePlayer.NoSQLData.GetCategoryLatest(MoviePlayer.Video, System.String)                                                                                                                                                                            
00000011a7ffa090 00007ff90cafda64 RepSyndWebApplication.Player.Default.Page_Load(System.Object, System.EventArgs)                                                                                                                                                                             
00000011a7ffa640 00007ff962abc0b7 System.Web.UI.Control.LoadRecursive()                                                                                                                                                                                                                           
00000011a7ffa690 00007ff962adcc4a System.Web.UI.Page.ProcessRequestMain(Boolean, Boolean)                                                                                                                                                                                                         
00000011a7ffa750 00007ff962adbec9 System.Web.UI.Page.ProcessRequest(Boolean, Boolean)                                                                                                                                                                                                             
00000011a7ffa7c0 00007ff962adbd27 System.Web.UI.Page.ProcessRequest()                                                                                                                                                                                                                             
00000011a7ffa860 00007ff962ada453 System.Web.UI.Page.ProcessRequest(System.Web.HttpContext)                                                                                                                                                                                                       
00000011a7ffa8b0 00007ff962ae4b61 System.Web.HttpApplication+CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()                                                                                                                                                         
00000011a7ffa990 00007ff962aabee5 System.Web.HttpApplication.ExecuteStep(IExecutionStep, Boolean ByRef)                                                                                                                                                                                           
00000011a7ffaa30 00007ff962ac954a System.Web.HttpApplication+PipelineStepManager.ResumeSteps(System.Exception)                                                                                                                                                                                    
00000011a7ffab80 00007ff962aac0f3 System.Web.HttpApplication.BeginProcessRequestNotification(System.Web.HttpContext, System.AsyncCallback)                                                                                                                                                        
00000011a7ffabd0 00007ff962aa613e System.Web.HttpRuntime.ProcessRequestNotificationPrivate(System.Web.Hosting.IIS7WorkerRequest, System.Web.HttpContext)                                                                                                                                          
00000011a7ffac70 00007ff962aaefb1 System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr, IntPtr, IntPtr, Int32)                                                                                                                                                              
00000011a7ffae80 00007ff962aae9e2 System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr, IntPtr, IntPtr, Int32)                                                                                                                                                                    
00000011a7ffaed0 00007ff9632028d1 DomainNeutralILStubClass.IL_STUB_ReversePInvoke(Int64, Int64, Int64, Int32)                                                                                                                                                                                     
00000011a7ffb6e8 0000000000000000 InlinedCallFrame                                                                                                                                                                                                                                                
00000011a7ffb6e8 0000000000000000 InlinedCallFrame                                                                                                                                                                                                                                                
00000011a7ffb6c0 00007ff962b5838b DomainNeutralILStubClass.IL_STUB_PInvoke(IntPtr, System.Web.RequestNotificationStatus ByRef)                                                                                                                                                                    
00000011a7ffb790 00007ff962aaf19f System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr, IntPtr, IntPtr, Int32)                                                                                                                                                              
00000011a7ffb9a0 00007ff962aae9e2 System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr, IntPtr, IntPtr, Int32)                                                                                                                                                                    
00000011a7ffb9f0 00007ff9632028d1 DomainNeutralILStubClass.IL_STUB_ReversePInvoke(Int64, Int64, Int64, Int32)                                                                                                                                                                                     
00000011a7ffda48 0000000000000000 InlinedCallFrame                                                                                                                                                                                                                                                
00000011a7ffda48 0000000000000000 InlinedCallFrame                                                                                                                                                                                                                                                
00000011a7ffda20 00007ff962b5838b DomainNeutralILStubClass.IL_STUB_PInvoke(IntPtr, System.Web.RequestNotificationStatus ByRef)                                                                                                                                                                    
00000011a7ffdaf0 00007ff962aaf19f System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr, IntPtr, IntPtr, Int32)                                                                                                                                                              
00000011a7ffdd00 00007ff962aae9e2 System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr, IntPtr, IntPtr, Int32)                                                                                                                                                                    
00000011a7ffdd50 00007ff9632028d1 DomainNeutralILStubClass.IL_STUB_ReversePInvoke(Int64, Int64, Int64, Int32)                                                                                                                                                                                     
00000011a7ffec98 0000000000000000 InlinedCallFrame                                                                                                                                                                                                                                                
00000011a7ffec98 0000000000000000 InlinedCallFrame                                                                                                                                                                                                                                                
00000011a7ffec70 00007ff962b5838b DomainNeutralILStubClass.IL_STUB_PInvoke(IntPtr, System.Web.RequestNotificationStatus ByRef)                                                                                                                                                                    
00000011a7ffed40 00007ff962aaf19f System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr, IntPtr, IntPtr, Int32)                                                                                                                                                              
00000011a7ffef50 00007ff962aae9e2 System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr, IntPtr, IntPtr, Int32)                                                                                                                                                                    
00000011a7ffefa0 00007ff9632028d1 DomainNeutralILStubClass.IL_STUB_ReversePInvoke(Int64, Int64, Int64, Int32)                                                                                                                                                                                     
00000011a7fff1a8 0000000000000000 ContextTransitionFrame

The common function in all the call stacks  is

MoviePlayer.TableStorage.GetListRangeEntity[[System.__Canon, mscorlib]](System.Collections.Generic.List`1<MoviePlayer.QueryTableStorage>, System.String)

It is executing the following code

public System.Collections.Generic.List<T> GetListRangeEntity<T>(System.Collections.Generic.List<MoviePlayer.QueryTableStorage> queue, string query)
{
  string string1 = "";
  if (!string.IsNullOrWhiteSpace(query)) goto lab1;
  if (string.IsNullOrWhiteSpace(query))
  {
    if (que != null)
    {
      System.Collections.Generic.List<MoviePlayer.QueryTableStorage>.Enumerator enumerator1 = queue.GetEnumerator();
      try
      {
        while (enumerator1.MoveNext())
        {
          MoviePlayer.QueryTableStorage storage1 = enumerator1.Current;
          switch (storage1.TypeTS)
          {
            case 0: goto lab2;
            case 1: goto lab3;
            case 2: goto lab4;
            case 3: goto lab5;
            case 4: goto lab6;
            case 5: goto lab7;
            case 6: goto lab8;
            case 7: goto lab9;
          }
          goto lab10;
        lab2:
          string1 = string.Concat(string1, Microsoft.WindowsAzure.Storage.Table.TableQuery.GenerateFilterCondition(storage1.KeyTS, storage1.OperationTS, storage1.ValueTS.ToString()));
          goto lab10;
        lab3:
          
   <SNIPPED>

The collection passed is as below.

0000000f58a9f1f0 System.Collections.Generic.List`1[[MoviePlayer.CategoryList, MoviePlayerDistList]]

The collection is as follows and contains 726 objects

MT                           Field           Offset        Type                 VT     Attr           Value                          Name

00007ff96af01250     4000cd1       8              System.Object[]   0      instance     0000000f58b9a7c8      _items

00007ff96af037c8     4000cd2       18             System.Int32       1      instance                           726      _size

00007ff96af037c8     4000cd3       1c             System.Int32       1      instance                           726      _version

00007ff96af011b8    4000cd4       10             System.Object     0      instance     0000000000000000    _syncRoot

Looking at the size of this object.

sizeof(0000000f58b40940) = 137048 (0x21758) bytes (MoviePlayer.CategoryList) 

This object MoviePlayer.CategoryList size is greater than 85,000 bytes. Any object greater than 85,000 bytes will not get allocated in the normal SOH heap but will go to the LOH (Large Object Heap). Details around LOH and GC can be found in the articles

http://msdn.microsoft.com/en-us/library/ee787088.aspx

http://msdn.microsoft.com/en-us/magazine/cc534993.aspx

If I look at the process uptime 1:45:31.000 = 6331 seconds in the first dump. If I look at the number of time GC has run it’s very high for Gen 2. It’s almost like Gen 2 collection is attempted in every two seconds.

.NET CLR Memory

Counter                               Value

===============     ==============

Bytes in All Heaps              84,495,440

GCHandles                        1,514

GEN 0 Collections              55,143

GEN 1 Collections              13,746

GEN 2 Collections             3,463

# Induced GCs                   0

# of Pinned Objects           2

Sync Blocks in use              121

Finalization Survivors          0

Total Commited Bytes        502,095,872

Total Reserved Bytes          18,253,578,240

GEN 0 Heap Size                26,423,840

GEN 1 Heap Size                2,989,384

GEN 2 Heap Size                52,896,880

LOH Size                            2,185,336

% Time in GC                     7.90%

So the Action Plan for this issue was to reduce the size of the object for MoviePlayer.CategoryList. Since most outside developer support engineer roles are not familiar with post mortem analysis the following could be used to find the size of the object using .Net or Visual Studio

http://stackoverflow.com/questions/324053/find-out-the-size-of-a-net-object

 After implementing the suggestions the CPU grows in linear fashion with load and not exponential. The CPU stopped hitting >90% and staying there, so there was no need to spawn additional instances of the role to server users.

Hope this article helps in understanding one of the fundamental causes of frequent GC leading to high CPU. It’s not just Azure but could happen on premises application as well.

Regards,

Angshuman Nayak

Cloud Integration Engineer

Not Able to Delete Storage Account – Ensure these image(s) and/or disk(s) are removed before deleting this storage account

$
0
0

 

While deleting an Azure Storage Account you might come across the following error.

Storage account portalvhds9x8ddnOgp9tn2 has some active image(s) and/or disk(s), e.g. annayakNE-annayakNE-O-201410240936090519. Ensure these image(s) and/or disk(s) are removed before deleting this storage account.

SCENARIO 1 – DISKS

Image1

A storage account can’t be deleted if it has VHDs that are attached as disk. These disks are created when creating Azure IaaS VM and you might have deleted the VMs but the disks are still around with a lease on the VHD located in this storage account.

Below steps which will help you the delete all VHD blobs and storage containers for your account.

Please follow the below steps :

1. Delete the necessary VM which has a lease on this VHD (If not already deleted).

2. Delete the associated disks/images, while deleting these, please ensure to select ”Delete the associated VHD“. You could also delete the VHDs manually.

clip_image004

clip_image006

clip_image008

3. Once the associated VHD’s are deleted, you will be able to delete the storage account.

clip_image010

Image6

SCENARIO 2 – IMAGES

Login to Azure portal.
Navigate to Virtual Machine -> Images.

clip_image001[4]

Select the image : Annayak-1-8-1-0-1-Ubuntu-12-10.
Delete the image.
You can chose to delete the associated VHD.
After deleting the VHD, you should be able to delete the storage account.

Hope this helps you delete your storage accounts when you get the error “Storage account portalvhds9x8ddnOgp9tn2 has some active image(s) and/or disk(s), e.g. annayakNE-annayakNE-O-201410240936090519. Ensure these image(s) and/or disk(s) are removed before deleting this storage account”.

 

 

Regards,
Angshuman Nayak
Cloud Integration Engineering

Azure Storage Queue – Randomly Getting 403 Forbidden on delete of queue message with REST API

$
0
0

 

While developing an application that reads and deletes messages on an Azure storage Queue using the REST API (not the Azure Storage libraries), some requests (but not all) to delete a message are returned by a 403 error and the message is not deleted. This does not happen all the time. In many cases it works fine but it seems to randomly fail for a few requests.

The remote server returned an error: (403) Forbidden.AuthenticationFailed. Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. The MAC signature found in the HTTP request ‘ALWhzP+84PAKpkQLpDj8Sl4MtnGkla3P0WjLkRaPDl4=’ is not the same as any computed signature.

So I took a fiddler to log the request and response.

This delete worked!

GETTING THE  MESSAGE  FROM  THE  QUEUE

Request

GET http://annayakstorage.queue.core.windows.net/restapiqueuecefbcdda-aadc-4676-9800-b96022ed78f6/messages HTTP/1.1
x-ms-date: Mon, 23 Feb 2015 16:24:59 GMT
x-ms-version: 2009-09-19
Authorization: SharedKey annayakstorage:gPlR4ol9dgBPfW9B/KQ9jKdSLZP8lakXKGQL73/xNQf=
Accept: application/atom+xml,application/xml
Host: annayakstorage.queue.core.windows.net

Response

HTTP/1.1 200 OK
Cache-Control: no-cache
Transfer-Encoding: chunked
Content-Type: application/xml
Server: Windows-Azure-Queue/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: gp9a9lfq-0007-0056-9k3q-9003l8000000
x-ms-version: 2009-09-19
Date: Mon, 23 Feb 2015 16:25:01 GMT
 
<?xml version="1.0" encoding="utf-8"?>
<QueueMessagesList>
<QueueMessage>
<MessageId>4g8ap7be-573k-6o9d-97ct-4k73k935gldk </MessageId>
<InsertionTime>Mon, 25 Feb 2015 16:24:59 GMT</InsertionTime>
<ExpirationTime>Mon, 04 Mar 2015 16:24:59 GMT</ExpirationTime>
<DequeueCount>1</DequeueCount>
<PopReceipt> AgAAAAMAAAAAAAAAFFKoV9QS0KF=</PopReceipt>
<TimeNextVisible>Mon, 25 Feb 2015 16:25:31 GMT</TimeNextVisible>
<MessageText> PQPxl9KmQLFaplGzSPQldxUaEL9lqPztDIF=</MessageText>
</QueueMessage>
</QueueMessagesList>

DELETING THE  MESSAGE  FROM  THE  QUEUE

Request

DELETE http://annayakstorage.queue.core.windows.net/restapiqueuecefbcdda-aadc-4676-9800-b96022ed78f6/messages/8e3be9ab-759b-4e0c-88bc-9c67d524bcad?popreceipt=AgAAAAMAAAAAAAAAFFKoV9QS0KF=HTTP/1.1  
x-ms-date: Mon, 25 Feb 2015 16:24:59 GMT
x-ms-version: 2009-09-19
Authorization: SharedKey annayakstorage:zelCPqaDnaqGqXi1Eq8+5wpgAPZ0l73xuoC9D3C4k2c=
Host: annayakstorage.queue.core.windows.net

Response

HTTP/1.1 204 No Content
Content-Length: 0
Server: Windows-Azure-Queue/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: gp9a8psq-0007-0056-8l6k-9003l8000000
x-ms-version: 2009-09-19
Date: Mon, 23 Feb 2015 16:25:01 GMT 

 

This Delete Failed!

GETTING THE  MESSAGE  FROM  THE  QUEUE

Request

GET http://annayakstorage.queue.core.windows.net/restapiqueuecefbcdda-aadc-4676-9800-b96022ed78f6/messages HTTP/1.1 
x-ms-date: Mon, 25 Feb 2015 16:24:59 GMT
x-ms-version: 2009-09-19
Authorization: SharedKey annayakstorage:gPlR4ol9dgBPfW9B/KQ9jKdSLZP8lakXKGQL73/xNQf=
Accept: application/atom+xml,application/xml
Host: annayakstorage.queue.core.windows.net

Response

HTTP/1.1 200 OK
Cache-Control: no-cache
Transfer-Encoding: chunked
Content-Type: application/xml
Server: Windows-Azure-Queue/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: ge9sk924-0005-0078-843u-8114k9000000
x-ms-version: 2009-09-19
Date: Mon, 25 Feb 2015 16:25:01 GMT
 
<?xml version="1.0" encoding="utf-8"?>
<QueueMessagesList>
<QueueMessage>
<MessageId>8dvk9450-g8dk-6932-4j83-3429rslw8l2a</MessageId>
<InsertionTime>Mon, 25 Feb 2015 16:24:59 GMT</InsertionTime>
<ExpirationTime>Mon, 04 Mar 2015 16:24:59 GMT</ExpirationTime>
<DequeueCount>1</DequeueCount>
<PopReceipt>AgAAAAMAAAAAAAAADQ+uV9QS0KF=</PopReceipt>
<TimeNextVisible>Mon, 23 Feb 2015 16:25:31 GMT</TimeNextVisible>
<MessageText>YULzc2PaPAPbpwTkQgLsyxEmDL4laWaLWdP=</MessageText>
</QueueMessage>
</QueueMessagesList>
 

DELETING THE  MESSAGE  FROM  THE  QUEUE

Request

DELETE http://annayakstorage.queue.core.windows.net/plccommandsqueuecefbcdda-aadc-4676-9800-b96022ed78f6/messages/9cdb1031-b4fb-4079-8e59-4323efcd3e4c?popreceipt=AgAAAAMAAAAAAAAADQ+uV9QS0KF=HTTP/1.1 
x-ms-date: Mon, 25 Feb 2015 16:24:59 GMT
x-ms-version: 2009-09-19
Authorization: SharedKey annayakstorage:QKWlaP+39WLQalWPaKd7Ka9MwpAjbh9Q9GaDlPxAFl9=
Host: annayakstorage.queue.core.windows.net

Response

HTTP/1.1 403 Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
Content-Length: 783
Content-Type: application/xml
Server: Microsoft-HTTPAPI/2.0
x-ms-request-id: ge9sk924-0005-0093-843u-8114k9000000
Date: Mon, 25 Feb 2015 16:25:01 GMT

Error

<?xml version="1.0" encoding="utf-8"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId: ge9sk924-0005-0093-843u-8114k9000000
 
Time:2015-02-25T16:25:01.8486841Z</Message><AuthenticationErrorDetail>The MAC signature found in the HTTP request ‘QKWlaP+39WLQalWPaKd7Ka9MwpAjbh9Q9GaDlPxAFl9=’ is not the same as any computed signature. Server used following string to sign: 
'DELETE 
x-ms-date:Mon, 25 Feb 2015 16:24:59 GMT
x-ms-version:2009-09-19
/annayakstorage/restapiqueuecefbcdda-aadc-4676-9800-b96022ed78f6/messages/9cdb1031-b4fb-4079-8e59-4323efcd3e4c
popreceipt:AgAAAAMAAAAAAAAAFF wV4VP0AE='.</AuthenticationErrorDetail></Error>

After spending quite some hours through the traces I noticed that in the failing case the popreceipt is not the same as the one inside the message and hence it gives the error. 

Popreceipt in the request – DELETE http://annayakstorage.queue.core.windows.net/plccommandsqueuecefbcdda-aadc-4676-9800-b96022ed78f6/messages/9cdb1031-b4fb-4079-8e59-4323efcd3e4c?popreceipt= AgAAAAMAAAAAAAAADQ+uV9QS0KF= HTTP/1.1

Popreceipt in the error response – popreceipt : AgAAAAMAAAAAAAAADQ uV9QS0KF=

If you notice the +is gone. In all the working cases I see the popreceipt didn’t have a +’. So whenever the message has a popreceipt with a +as below it was failing.

<?xml version="1.0" encoding="utf-8"?>

<QueueMessagesList>

<QueueMessage>

<MessageId>8dvk9450-g8dk-6932-4j83-3429rslw8l2a</MessageId>

<InsertionTime>Mon, 25 Feb 2015 16:24:59 GMT</InsertionTime>

<ExpirationTime>Mon, 04 Mar 2015 16:24:59 GMT</ExpirationTime>

<DequeueCount>1</DequeueCount>

<PopReceipt>AgAAAAMAAAAAAAAADQ+uV9QS0KF=</PopReceipt>

<TimeNextVisible>Mon, 25 Feb 2015 16:25:31 GMT</TimeNextVisible>

<MessageText>YULzc2PaPAPbpwTkQgLsyxEmDL4laWaLWdP=</MessageText>

</QueueMessage>

</QueueMessagesList>

As per the standard reserved characters need to be URL encoded when transmitted over the internet. http://en.wikipedia.org/wiki/Percent-encoding

So I used the .Net class WebUtility (https://msdn.microsoft.com/en-us/library/zttxte6w(v=vs.110).aspx) and URL encoded the parameters like popreceipt.

The following changes were made to the code to encode the special character. 

String urlPath = String.Format("{0}/messages/{1}?popreceipt={2}", 
WebUtility.UrlEncode(queueName), WebUtility.UrlEncode(messageid),
WebUtility.UrlEncode(popreceipt));

It started working fine after that and the deletes don’t fail randomly anymore.

Regards,

Angshuman Nayak

Cloud Integration Engineer

The VM size (or combination of VM sizes) required by this deployment cannot be provisioned due to deployment request constraints

$
0
0

 

While trying to deploy a D-Series IaaS VM (could happen for A 8/A9 IaaS VMs as well) from the Azure Portal or PowerShell you may get the following error. It could also happen if you try to deploy or re-deploy a PaaS Cloud Service after increasing the VM size in the configuration to a Virtual Network(VNET) .

Current Portal

image

New Portal

image

Error:

The VM size (or combination of VM sizes) required by this deployment cannot be provisioned due to deployment request constraints. If possible, try relaxing constraints such as virtual network bindings, deploying to a hosted service with no other deployment in it and to a different affinity group or with no affinity group, or try deploying to a different region. The long running operation tracking ID was: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx. 

Please check if you are trying to deploy this IaaS VM to a Cloud Service that is part of an existing Virtual Network. The cause of the failure is that existing VNET (Virtual Networks) are attached to an Affinity Group. Affinity Group is bound to a set of servers.

Deployment Configuration for Local Virtual Network

<VirtualNetworkSitename=“VNetLocal” AffinityGroup= “VNetLocalAffinity”>

So you try the following options

a) Deploy this IaaS VM outside the VNET.

b) If it’s required to have this VM in the same VNET e.g. in a situation where it need to be part of the existing solution, then you need to convert the VNET from a Local Virtual Network to a Regional Virtual Network.

Details – http://azure.microsoft.com/blog/2014/05/14/regional-virtual-networks/

Regards,
Angshuman Nayak
Cloud Integration Engineer

Installing DebugDiag and importing rules thru Azure Cloud Services startup tasks

$
0
0

This article describes the steps for how to install DebugDiag version 2 update 1 on Cloud Services Web and Worker Roles using startup tasks.

Note: In this article, the steps were applied to a Worker Role, but it also works for Web Roles.

Preparing the DebugDiag installer and the configuration file with the files

  1. Download DebugDiag Debug Diagnostic Tool v2 Update 1 (The procedure was made with version x64 of DebugDiag 2 Update 1, but it was also tested with version 1.2 and it works fine) 
  2. Install DebugDiag on a machine where you can create the rule the way you want, and after created and activated, you can click on the Export button in right bottom side of the tool and Export to file named “DebugDiagRule.ddconfig”. For information about creating rules in DebugDiag, see Configuring DebugDiag to Automatically Capture a Full User Dump on a Managed Function:

Note: For Azure PaaS VMs environment, it’s highly recommended that user files, such as the Dumps in this case, are generated in C: drive which is the “user” drive and not in the D: drive (system) or E drive (Application drive). You can set up the “Userdump Path” to C:\DebugDiagDumps in the rule creation, or you can edit the exported file (see more information on the step 3).

 

 

       3.  In order make sure that the dump will be generated on the C: drive of your Cloud Service instance, using notepad, you can open the “DebugDiagRule.ddconfig” file with that you created in the previous step, and look for “DumpPath” and make sure it’s set to “C:\DebugDiagDumps”. See example:

 

<DebugDiag MaxClrExceptionDetailsPerSecond="30" MaxClrExceptionStacksPerSecond="20" MaxClrExceptionStacksPerExceptionType="10" MaxClrExceptionStacksTotal="100" MaxNativeExceptionStacksPerSecond="30"><Rules><Rule TargetType="PROCESS" TargetName="WaWorkerHost.exe" UFEActionType="" UFEActionLimit="0" MaxDumpLimit="10" MatchingLeakRuleKey="" PageheapType="" RuleType="CRASH" Active="TRUE" RuleName="Crash rule for all instances of WaWorkerHost.exe" DumpPath="C:\DebugDiagDumps"><Exceptions><Exception ExceptionCode="E0434352" ExceptionName="CLR (.NET) 4.0 Exception - System.NullReferenceException" ExceptionData="System.NullReferenceException" ExceptionData2="" ExceptionDataCheck="FALSE" ActionType="FULLDUMP" ActionLimit="3"/></Exceptions><Events/><Breakpoints/></Rule></Rules></DebugDiag>

 

Note: If the directory set in the DumpPath does not exist in the machine where the rule will be imported, DebugDiag will create it.

       4.  For a Role (Web or Worker Role)

    1. In Solution Explorer, under Roles in the cloud service project right click on your role and select Add>New Folder. Create a folder named Startup
    2. Right click on the Startup folder and select Add>Existing Item. Select the DebugDiag installer and the DebugDiag configuration file and add them to the Startup folder.

 

 

Define startup tasks for your roles

Startup tasks allow you to perform operations before a role starts. In this case, we will use a startup task for installing DebugDiag and another task for importing the configuration file containing the rule exported previously. For more information on startup tasks see: Run Startup Tasks in Azure.

  1. Add the following to the ServiceDefinition.csdef file under the WebRole or WorkerRole node for all roles:
<Startup>
<Task commandLine="Startup\Installer.cmd" executionContext="elevated" taskType="simple"/>
<Task commandLine="Startup\ImportDebugConfig.cmd" executionContext="elevated" taskType="simple"/>
</Startup>

The above configuration will run the console command Install.cmd and ImportDebugConfig.cmd with administrator privileges so it can install DebugDiag and right after that import the configuration file containing the rule.

        2.  Create the Installer.cmd file with the following content:

 if not exist "%ProgramFiles%"\DebugDiag\ msiexec /i %~dp0DebugDiagx64.msi /qn

 

The Installer script will first check if the DebugDiag folder exists, if not, it will install the DebugDiagx64.msi in silent mode.

Note: If you uninstall DebugDiag manually, the DebugDiag Folder will still exist inside Program Files folder, so the Installer will not install DebugDiag since the folder exists. This article is intended to have DebugDiag installed and with the imported rules running and activated again in case of a VM reimage, new instances etc.

       3.  Create the ImportDebugConfig.cmd file with the following content:

 "%ProgramFiles%\DebugDiag\DebugDiag.Collection.exe" /importConfig %~dp0DebugDiagRule.ddconfig –DoNotPrompt

 

The ImportDebugConfig script will import the configuration file. After that, the rule will be created and activated.

In case DebugDiag already has the rule imported (same name), this command will overwrite it.

 

NOTE:

Use a simple text editor like notepad to create this file. If you use Visual Studio to create a text file and then rename it to ‘.cmd’ the file may still contain a UTF-8 Byte Order Mark and running the first line of the script will result in an error. If you were to use Visual Studio to create the file leave add a REM (Remark) to the first line of the file so that it is ignored when run.

 

       4.  Add the Installer.cmd and ImportDebugConfig.cmd files to the roles by right click on the Startup folder inside the role and selecting Add>Existing Item. So the roles should now have the files DebugDiagRule.ddconfig, DebugDiagx64.msi, Installer.cmd and
ImportDebugConfig.cmd:

 

Deploying your service

When you deploy your service, the startup tasks will run, install DebugDiag and import the config file containing the rule. The installation and configuration of DebugDiag is fast, so after the instance is ready you can RDP to the instance and make sure DebugDiag is installed and have your rule activated.

Cloud Services roles recycling with the error “System.IO.FileLoadException: Could not load file or assembly”

$
0
0

You may be facing an issue where after a deploy, your Cloud Service role gets stuck in “starting” or “recycling” states. In this case, as the initial troubleshooting steps, we have to remote access the instance, start checking the logs and try to find out evidences about what can be causing the issue. For an excellent guidance about what logs to look, please refer to this excellent Kevin Williamson’s article.
If you get into the same situation above, one of the common causes may be the following exception:

 
1044
WaIISHost
Role entrypoint could not be created: System.TypeLoadException: Unable to load the role entry point due to the following exceptions:
– System.IO.FileLoadException: Could not load file or assembly ‘System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35′ or one of its dependencies. The located assembly’s manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
File name: ‘System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35′
 
WRN: Assembly binding logging is turned OFF.
To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1.
Note: There is some performance penalty associated with assembly bind failure logging.
To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].
 
—> System.Reflection.ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
   at System.Reflection.RuntimeModule.GetTypes(RuntimeModule module)
   at System.Reflection.RuntimeModule.GetTypes()
   at System.Reflection.Assembly.GetTypes()
   at Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.GetRoleEntryPoint(Assembly entryPointAssembly)
   — End of inner exception stack trace —
   at Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.GetRoleEntryPoint(Assembly entryPointAssembly)
   at Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.CreateRoleEntryPoint(RoleType roleTypeEnum)
   at Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.InitializeRoleInternal(RoleType roleTypeEnum)
 
the message resource is present but the message is not found in the string/message table

 

This exception is usually recorded in the Azure event log (see the next image) and it means that something in your project is referencing a wrong version of the assembly. In this case, it is failing to load the version 3.0.0.0 of System.Web.Mvc which is not the current version of the assembly (the current one in this case is 5.0.0.0) so that is where the exception happens.

 

 

The best way to fix this issue is to fix the wrong references inside your project. However, it can take some time if you are not sure what exactly is making the wrong references. In this case, the faster way is to use the bindingRedirect in the configuration files.
Usually, when a new assembly is added to your project, Visual Studio will automatically create a bindingRedirect entry in your web.config (Web Role) or app.config (Worker Role) just to avoid the wrong assembly version issue.

 

 

However, in Azure Cloud Services the assembly bindings from web.config and app.config have no effect, because WaIISHost (Web Role) and WaWorkerHost (Worker Role) are not able to read these two configuration files. Instead, they read the <role name>.dll.config file, and that is the file where the assembly binding configuration needs to be. Please refer to this article for more details.
The problem is that the <role name>.dll.config file is not added to the solution by default, and even when it is there, it may not contain the assembly binding configuration found in web.config or app.config.

 

Solution:

1) Open the <role name>.dll.config located in your project bin folder.
2) Check if there is the BindingRedirect entry that you need. If not, follow one of the two options below:
     a) Copy the web.config or app.config content (considering one of these two configuration files has the information that you need) and paste it into the <role name>.dll.config file.
     b) Manually create an Assembly Binding entry:
 

<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="Newtonsoft.Json" culture="neutral" publicKeyToken="30ad4fe6b2a6aeed" />
      <bindingRedirect oldVersion="0.0.0.0-6.0.0.0" newVersion="6.0.0.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>

NOTE: In order to discover the publicKeyToken, execute the following PowerShell command:

PS C:\Windows\System32> ([system.reflection.assembly]::loadfile("dll full path")).FullName

Where "dll full path" is the DLL's location. Example:

PS C:\WINDOWS\system32> ([system.reflection.assembly]::loadfile("C:\logs\Newtonsoft.Json.dll")).FullName

You will have the following output:

Newtonsoft.Json, Version=6.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed

 

3) Add the <role name>.dll.config file to your Solution (same level as the web.config or app.config) and set the Copy to Output Directory property to “Copy Always”.

 

 

4) Redeploy to your Cloud Service.

 

NOTE: As a quick test, in case you are not able to make a new deployment, you can copy the <role name>.dll.config file into the bin folder of your package on the instance in Azure (<application drive>:\approot\bin) and wait a few minutes until the WaHostBootstrapper.exe process restarts the WaIISHost.exe or WaWorkerHost.exe process; the role will then start normally. However, do not forget to redeploy, since any manual change inside Cloud Services instances will eventually be lost.
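As a sketch of that quick test, after RDPing into the instance the copy could look like this (the role name WebRole1, the source path, and the drive letter are all hypothetical; adjust them to your deployment):

```cmd
REM Copy the role's .dll.config into the deployed package's bin folder
copy /Y C:\temp\WebRole1.dll.config E:\approot\bin\
```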

 

Automatically flushing DNS in Azure PaaS Cloud Services Instances


I have worked on a case where, for a specific reason, DNS had to be flushed on the PaaS Cloud Service instances every 8 hours. This is completely possible; however, since we are talking about PaaS Cloud Services and we already know we cannot apply manual changes (the PaaS instances are stateless), we will have to use startup tasks and Windows Task Scheduler to get this done. Please see the following steps:

 

You will need to:

 1) Create a cmd file named “flushdns.cmd” with the flush DNS command (ipconfig /flushdns) and any other that you want or need (this cmd file will be used by Task Scheduler to flush DNS)
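A minimal flushdns.cmd could look like the following sketch (the log redirection is an optional addition of mine, not part of the original instructions):

```cmd
REM flushdns.cmd - executed by Task Scheduler every 8 hours
ipconfig /flushdns >> "%TEMP%\flushdns.log" 2>&1
exit /b 0
```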

 2) Create another cmd file named “task-flushdns.cmd” containing a schtasks command that configures Task Scheduler to run the flushdns.cmd command file every 8 hours

Command:

Schtasks /create /tn FlushDNS /tr E:\approot\Startup\flushdns.cmd /sc hourly /mo 8 /ru System

 

            Command Details:

            a) “E:\approot\Startup\flushdns.cmd” is the part of the command where the flushdns.cmd file is called

            b) The task is set to execute under the System account via “/ru System”

            c) After following step 3, “E:\approot\Startup\” is the location where flushdns.cmd will reside

            d) More details about configuring Task Scheduler from the command line are here.
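To confirm the task was registered after deployment, you can RDP into the instance and query it with standard schtasks usage:

```cmd
schtasks /query /tn FlushDNS /v /fo LIST
```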

 

 3) For a Role (Web or Worker Role)

    1. In Solution Explorer, under Roles in the cloud service project right click on your role and select Add>New Folder. Create a folder named Startup
    2. Right click on the Startup folder and select Add>Existing Item. Select the flushdns.cmd and task-flushdns.cmd files and add them to the Startup folder.

 

 

 4) Create a startup task in the ServiceDefinition.csdef: Now you will have to create the startup task itself. For this, add the following to the ServiceDefinition.csdef file under the WebRole or WorkerRole node. For more information on startup tasks see: Run Startup Tasks in Azure.

 

 

<Startup>
  <Task commandLine="task-flushdns.cmd" executionContext="elevated" taskType="simple" />
</Startup>

 

Note: The above configuration runs the task-flushdns.cmd file, which configures Task Scheduler to run the flushdns.cmd command file every 8 hours.

 

 5) Redeploy

 

Sources:

https://technet.microsoft.com/en-us/library/cc781949(v=ws.10).aspx

https://technet.microsoft.com/en-us/library/cc772785(v=ws.10).aspx

https://msdn.microsoft.com/library/azure/hh180155.aspx

 

 

 

 


Webhooks for Azure Alerts – Creating a sample ASP.NET receiver application


Microsoft Azure recently announced support for webhooks on Azure Alerts. Now you can provide an HTTPS endpoint to receive webhooks when creating an alert in the Azure portal.

Webhooks are user-defined HTTP callbacks that are triggered by an event. Webhooks let you get more out of Azure Alerts: you can specify an HTTP or HTTPS endpoint as a webhook while creating or updating an alert on the Azure Portal.

In this article I will walk you through creating a sample application to receive webhooks from Azure Alerts, configuring an alert to use this endpoint, and testing the overall flow.

Create a Receiver Application

Open Visual Studio 2015 and create a New ASP.Net Web Application

 

[Figure 1]

Select the Empty template from the available ASP.NET 4.5 templates and check the option to add the Web API folders and core references as below.

[Figure 2]

Add the Microsoft.AspNet.WebHooks.Receivers.Azure NuGet package. Don’t forget to check Include prerelease if you cannot find this package in the search results.

[Figure 3]

After installing the NuGet package, add the line below to the Register method in the WebApiConfig class.

config.InitializeReceiveAzureAlertWebHooks();

You can add the above code after the routing code as shown in Figure 4.

[Figure 4]

This code registers your webhooks receiver.

The next step is to add the application setting below to your web.config file. This setting adds the secret key used to validate that the WebHook requests are indeed from Azure Alerts. It is advisable to use a SHA256 hash or similar value, which you can get from FreeFormatter Online Tools For Developers. This secret key will be part of the receiver URL provided in the Azure Portal while creating the Azure Alerts.

<appSettings>
<add key="MS_WebHookReceiverSecret_AzureAlert" value="d3a0f7968f7ded184194f848512c58c7f44cde25" />
</appSettings>
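If you prefer to generate a secret locally, one possible PowerShell sketch (any 40-character hex string works as the value; this approach is my suggestion, not part of the WebHooks library) is:

```powershell
# Generate 20 random bytes and format them as a 40-character hex string
-join ((1..20) | ForEach-Object { '{0:x2}' -f (Get-Random -Maximum 256) })
```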

Next we need to add handlers to process the webhooks data sent by Azure Alerts.

Add a new class AzureAlertsWebHooksDataHandler and add the below code to it.

using System.Threading.Tasks;
using Microsoft.AspNet.WebHooks;

namespace MyWebhooksDemo1.App_Code
{
    public class AzureAlertsWebHooksDataHandler : WebHookHandler
    {
        public AzureAlertsWebHooksDataHandler()
        {
            Receiver = "azurealert";
        }

        public override Task ExecuteAsync(string generator, WebHookHandlerContext context)
        {
            // Convert to POCO type
            AzureAlertNotification notification = context.GetDataOrDefault<AzureAlertNotification>();

            // Get the notification name
            string name = notification.Context.Name + " -- "
                + notification.Context.Timestamp.ToFileTime().ToString();

            return Task.FromResult(true);
        }
    }
}

This is the most basic handler. In the constructor we initialize the Receiver property so this handler processes only Azure Alert webhooks. The ExecuteAsync method is responsible for processing the posted data and returning a response to indicate the webhook was received.

We will now expand this code to actually process the data received in the webhooks. Let’s store the data posted by the Azure Alerts webhooks sender in Azure table storage.

To do this, first add the WindowsAzure.Storage NuGet package and add the code below to import the required Azure storage namespaces.

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Table;
using System.Configuration;   //To read connectionstring from the config files.

Also add your Azure storage connection string in the application settings as below.

  <add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=your-account-name;AccountKey=your-account-key" />

And add a small TableEntity implementation as below to store data in Azure Table storage.

public class WHEntity : TableEntity
{
    public WHEntity(string receiver, string name)
    {
        this.PartitionKey = receiver;
        this.RowKey = name;
    }

    public WHEntity() { }

    public string FullData { get; set; }
}
Finally, let’s modify the ExecuteAsync method to process the data sent by the webhooks sender and store it in Azure Table storage as below.

public override Task ExecuteAsync(string generator, WebHookHandlerContext context)
{
    // Convert to POCO type
    AzureAlertNotification notification = context.GetDataOrDefault<AzureAlertNotification>();

    // Get the notification name
    string name = notification.Context.Name + " -- "
        + notification.Context.Timestamp.ToFileTime().ToString();

    WHEntity wHEntity1 = new WHEntity(this.Receiver, name);
    wHEntity1.FullData = context.Data.ToString();

    // Retrieve the storage account from the connection string.
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
        ConfigurationManager.AppSettings["StorageConnectionString"]);

    // Create the table client.
    CloudTableClient tableClient = storageAccount.CreateCloudTableClient();

    // Create the table if it does not exist.
    CloudTable table = tableClient.GetTableReference("azurealertdemo");
    table.CreateIfNotExists();

    // Insert or replace the entity.
    TableOperation insertOperation = TableOperation.InsertOrReplace(wHEntity1);
    table.Execute(insertOperation);

    return Task.FromResult(true);
}

The data sent in by the WebHooks sender is stored in JSON format, in the Data field of the WebHookHandlerContext object which is passed in as a parameter to the ExecuteAsync method. In the above method, I’m converting it to string and storing in Azure Table storage.

Now publish this code to an Azure Website. After publishing you can use the below URL to configure Azure Alerts to send Webhooks to the receiver we created above.

https://<host>/api/webhooks/incoming/azurealert?code=d3a0f7968f7ded184194f848512c58c7f44cde25

Note:
The Code in the above URL is the same as the secret key we have configured in the application settings.

Configure webhooks for Azure Alerts

Now log in to the new Azure portal to configure an Azure alert to send webhooks to the receiver we created above.

Browse and select a resource for which you want to configure the alerts. For simplicity, let’s create an alert for the webhooks receiver Azure website we created above.

Create a new alert (webhooks are currently supported on metric alerts only), and provide your webhooks receiver URL in the Webhooks field as below.

[Figure 5]

Verify the Results:

Configure the alert so that you can verify the results quickly. You can accomplish this by keeping the Threshold and the Period at the minimum. I have set the Period to 5 minutes in the above example. Hence, after 5 minutes, if the threshold is reached, an alert fires and a webhook is posted to our receiver URL. This data is then processed and stored in Azure Table storage as below.

[Figure 6]

Sample JSON object posted by the Azure Alerts Webhooks is as below.

{
  "status": "Resolved",
  "context": {
    "id": "/subscriptions/<your-subscriptionId>/resourceGroups/webhooksdemo1/providers/microsoft.insights/alertrules/webhooksdemo",
    "name": "webhooksdemo",
    "description": "webhooksdemo",
    "conditionType": "Metric",
    "condition": {
      "metricName": "Requests",
      "metricUnit": "Count",
      "metricValue": "1",
      "threshold": "1",
      "windowSize": "5",
      "timeAggregation": "Total",
      "operator": "GreaterThan"
    },
    "subscriptionId": "<your-subscriptionId>",
    "resourceGroupName": "webhooksdemo1",
    "timestamp": "2015-10-14T09:43:20.264882Z",
    "resourceName": "mywebhooksdemo1",
    "resourceType": "microsoft.web/sites",
    "resourceId": "/subscriptions/<your-subscriptionId>/resourceGroups/webhooksdemo1/providers/Microsoft.Web/sites/MyWebhooksDemo1",
    "resourceRegion": "East US",
    "portalLink": "https://portal.azure.com/#resource/subscriptions/<your-subscriptionId>/resourceGroups/webhooksdemo1/providers/Microsoft.Web/sites/MyWebhooksDemo1"
  },
  "properties": {}
}

Alternatively, you can also use the Fiddler request composer to post to your webhooks receiver URL and check the response and the corresponding updates in Azure Table storage. Make sure that the Content-Type is application/json and the request body contains JSON similar to the above example. A Fiddler request should look like the example below.

[Figure 7]
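Instead of Fiddler, you can also post the sample JSON with curl from a command prompt (a sketch: the host name and the sample-alert.json file are placeholders, and the code query parameter must match your configured secret):

```cmd
curl -X POST "https://<host>/api/webhooks/incoming/azurealert?code=d3a0f7968f7ded184194f848512c58c7f44cde25" ^
     -H "Content-Type: application/json" ^
     -d @sample-alert.json
```

Here sample-alert.json would contain JSON like the example above.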

Note:
Webhooks are internally configured to retry a few times within a short duration until they receive a successful response from the receiver. Hence you might see multiple requests hitting the endpoint in the ExecuteAsync method if you are debugging it remotely.

References:

Receive WebHooks from Azure Alerts and Kudu (Azure Web App Deployment) by Henrik F Nielsen
http://blogs.msdn.com/b/webdev/archive/2015/10/04/receive-webhooks-from-azure-alerts-and-kudu-azure-web-app-deployment.aspx

Introducing Microsoft ASP.NET WebHooks Preview by Henrik F Nielsen
http://blogs.msdn.com/b/webdev/archive/2015/09/04/introducing-microsoft-asp-net-webhooks-preview.aspx

Webhooks for Azure Alerts
https://azure.microsoft.com/en-us/blog/webhooks-for-azure-alerts/

How to configure webhooks for alerts
https://azure.microsoft.com/en-us/documentation/articles/insights-webhooks-alerts/

Error “Access to the path ‘E:\sitesroot\0\Web.config’ is denied” when storing Azure AD’s public key in the Web.config of an Azure Cloud Services application.


I have worked on a scenario where a Web Role application which had been working fine for a long time just started throwing the error “Access to the path ‘E:\sitesroot\0\Web.config’ is denied” without any change or update to the deployment:

  


 
  

Looking at the error, it is fairly clear that, for some reason, the Application Pool identity lacks some specific access to the web.config file. But since we did not make any change to the deployment, a few questions come into play:

  1. What is the default Application Pool identity account for a Web Role?
  2. What permission does this account need now?
  3. Why just now?

These are very good questions, and I will answer them one by one:

1)    For Azure Cloud Services Web Roles, the default Application Pool Identity account is “Network Service”

  
  
  
  

2)    On a normal basis, the Application Pool account needs read permission on the web.config file so it can read the application configuration. However, looking at the Security info for this config file inside the instance, we can see Network Service already has read access.

 

  
So, what else does this account need? In this specific case, after analyzing the web.config content, we found a block that looks like the following:

<issuerNameRegistry type="System.IdentityModel.Tokens.ValidatingIssuerNameRegistry, System.IdentityModel.Tokens.ValidatingIssuerNameRegistry">
  <authority name="https://sts.windows.net/ec4187af-07da-4f01-b18f-64c2f5abecea/">
    <keys>
      <add thumbprint="3A38FA984E8560F19AADC9F86FE9594BB6AD049B" />
    </keys>
  </authority>
</issuerNameRegistry>

Note: The above block was taken from the article Important Information About Signing Key Rollover in Azure AD.

 

This means the application has code that writes updated Azure AD keys into the web.config file, and this operation requires the NETWORK SERVICE account to have WRITE permission on the web.config file. If you are not familiar with Azure AD’s public keys, please see Overview of Signing Keys in Azure AD.

Note: It is recommended that your application cache these keys in a database or a configuration file to increase the efficiency of communicating with Azure AD during the sign-in process and to quickly validate a token using a different key.

Now that we know what is causing the issue, we can go to the Security tab in the web.config properties and manually give write permission to the NETWORK SERVICE account, and the application will start working again.

 

 

3)    Answering the last question: according to this article, the code updates the web.config only when there is a change to the certificates. This is probably the first time the code executed, tried to update the web.config file, and ran into the issue.

“Once you have followed these steps, your application’s Web.config will be updated with the latest information from the federation metadata document, including the latest keys. This update will occur every time your application pool recycles in IIS; by default, IIS is set to recycle applications every 29 hours. For more context about this update, see Adding Sign-On to Your Web Application Using Azure AD.”

This explains why the application ran into the issue only now, and we also know a workaround for it. However, we must not forget we are working with Azure PaaS Cloud Services, which are “stateless”, meaning this manual change will eventually disappear. So what do we do? In this case, the best thing to do is to create a startup task that gives the NETWORK SERVICE account write permission on the application web.config file. Please follow the steps below to get this done.

 

Creating a Startup Task to give write permission to Network Service in the application Web.config file

  

1)    We first need to have the right command line that can get the above task done, and here it is:

 

icacls E:\sitesroot\0\Web.config /grant "NT AUTHORITY\NetworkService":(W)

 

Note: You can also test the above command inside the instance to make sure it works. For more context on the ICACLS command, see here.
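To inspect the resulting ACL inside the instance, you can list the file's permissions with plain icacls (standard usage; NETWORK SERVICE should show a W entry after the grant):

```cmd
icacls E:\sitesroot\0\Web.config
```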

 

2)    Create a cmd file named “manageacl.cmd” with the command from step 1 as its content (you can name it whatever you want; you will use this file name in the next step).

 

3)    Right-click on your application project in Visual Studio, choose “Add > Existing Item…”, and add the manageacl.cmd file created in the previous step.

 

 

Note: Set the “Copy to Output Directory” property of the cmd file to “Copy Always”, otherwise the file will not be copied to the package when you publish it.

 

 

4)    Add the following to the ServiceDefinition.csdef file under the WebRole:

 

<Startup>
  <Task commandLine="manageacl.cmd" executionContext="elevated" taskType="background" />
</Startup>

 

Note: We are using taskType "background" because the role needs to be deployed for the web.config file to exist in the E:\sitesroot\0\ directory. If we used taskType="simple", the role would not start until this command completed.

 

5)     Publish

 

After the steps above, you can RDP to your instance and check the security property of the web.config file and you will see that NETWORK SERVICE now has write permission.

Azure Cloud Service package gets automatically deleted after Azure Account gets suspended/disabled


You may get into a situation where you have some kind of issue with your Azure subscription (e.g. you have reached the spending limit, there are issues with the credit card, etc.) and your account gets suspended/disabled. Right after this issue is fixed, you notice your PaaS Cloud Services are all empty, without any deployments, and the packages are gone. You may also notice your IaaS VMs are stopped, but you are able to simply start them again within seconds. So the big questions are: why are my deployment packages gone, and how do I get them back to my Cloud Service? See the following points for the explanation:

 

Cloud Services packages are gone:

When Azure accounts are disabled, by default, all the PaaS Cloud Services deployment packages are also deleted, for a few reasons:

  • PaaS VMs are stateless and it is not possible to de-allocate them as we do with IaaS VMs, which means that shutting them down will not prevent the customer from being billed for compute hours. Deleting the deployment packages prevents disabled accounts from accruing additional compute hours.
  • Only the package (.cspkg) and its configuration file (.cscfg) are uploaded to Azure, and these source files should still be with the developers.

 

What would be the Solution?

Given that the package was uploaded to Azure during the deployment phase, the short-term solution is to upload the packages (.cspkg and .cscfg) to the Cloud Service again. This way, Azure will recreate the deployment the same way it was before, and the applications will be up again within minutes.

 

What if for some reason I don’t have the packages anymore?

For every deployment made to Cloud Services, Azure stores the related package files (.cspkg and .cscfg) in an internal Azure storage account (to which only Azure has access) for a few days. Given that, we have an internal process for retrieving the packages and sending them to the customer’s storage account. For this, you have to open a ticket with the Azure Technical Support team and provide the following information:

 

Note: The Deployment ID is the most important information in this process since this is the only way we can identify the related package files internally. The process can’t be completed without this information.

 

In case you don’t know where to find the deployment ID for a deleted Deployment, here are some tips to find it out:

  • Check the Operations History for your subscription in the Azure Portal, look for an operation made on the Cloud Service you want (staging or production slot), and get the Deployment ID from the operation details. To do this, log in to the Azure Portal (manage.windowsazure.com) and go to “Management Services | Operation Logs”.

img1

img2

  • In Visual Studio’s Server Explorer, under storage accounts, you may find storage accounts associated with your cloud services, and their tables may contain log tables with the Deployment IDs.
  • Ask the engineer who owns the case to look for any Operation ID from the Cloud Service and get the Deployment ID from Azure internal logs.

 

Note: Azure only keeps the Operations History for 90 days, so any operation older than that cannot be found, and if there is no operation in this time range for the Cloud Service, we will not be able to find the Deployment ID.

 

Understanding CPU metric data from Azure Cloud Services.


In this article we will learn how to interpret the CPU metric in both the Azure Portal as well as in the Windows Azure Diagnostic (WAD) tables and understand the differences between data in WAD tables and in Azure Portal. We have focused on the CPU as an example, but the same information can be used for other metrics as well.

Also, we start from the assumption that you have already gone through How to Monitor Cloud Services and followed the steps there.

 

Note: CPU usage, as well as Data In, Data Out, Disk Read Throughput, and Disk Write Throughput, are all captured by default, even without enabling Azure Diagnostics (previously called Windows Azure Diagnostics, or WAD).

 

Let’s take a look at the following image, which shows the Azure Portal Dashboard on the Monitor tab for a Cloud Service instance called “WebRole1_IN_0”; the time zone for the portal screenshot is UTC-3.

 

 

CPU-image1

 

If we check this dashboard and put the mouse pointer over 11:45am (14:45 UTC) we can see CPU Percentage [Avg] = 2.13%, and over 12:00pm (15:00 UTC) we can see CPU Percentage [Avg] = 0.6%:

CPU-image2

 

CPU-image3

 

If we go to the storage account that is set in the WAD configuration and check the table “WAD[DeploymentID]PT5MRITable” (this table holds performance counter data at 5-minute aggregation), we see different values for total, minimum and maximum for the same counter (screenshots from the same timestamps as the two images above, respectively):

 

Note: In order to have performance counter data stored in WAD tables inside your storage account, you must have Diagnostics (WAD) and Verbose monitoring enabled for your role; otherwise only the minimal metrics (CPU Percentage, Data In, Data Out, Disk Read Throughput, and Disk Write Throughput) will be available, and only in the Azure Portal Dashboard. See how to Configure monitoring for cloud services.

CPU-image4

CPU-image5

 

 

Note: Timestamps in WAD tables cover the data between that timestamp and the previous one; in the Portal Dashboard they cover the data between the timestamp and the one after.

 

So, what is this data about, and why is it different? Let’s analyze the second timestamp mentioned, which is between 12:00pm – 12:05pm UTC-3 (15:00 – 15:05 UTC).

 

Analysis:

 

The metric was sampled twice in this range of around 5 minutes: the lower usage (minimum) of the two collections was 0.11647%, and the higher (maximum) was 0.116721%.

CPU-image6

 

In the portal, however, the underlying data is the same; we just see a different presentation of it.

CPU-image7

 

When we put the mouse pointer on any of the graph points we can see the percentage usage average for the next 5 minutes, which means that what we see in the dashboard is calculated from the performance counter data of the role instances in that specific time range. In this case, the CPU had a usage average of 0.6% from 5/5/2016 12:00 PM – 12:05 PM UTC-3 (15:00 – 15:05 UTC). See more details in How to Monitor Cloud Services:

 

By default performance counter data from role instances is sampled and transferred from the role instance at 3-minute intervals. When you enable verbose monitoring, the raw performance counter data is aggregated for each role instance and across role instances for each role at intervals of 5 minutes, 1 hour, and 12 hours. The aggregated data is purged after 10 days.

 

For the bottom part of the “Monitor Tab” we have the following data for counters:

 

  • Name: Name of the Metric
  • Source: Where the Metric is being taken from.
  • Min: The Minimum usage average percentage (the lower value) for the whole dashboard period being presented. In this case “1 Hour”
  • Max: The Maximum usage average percentage (the higher value) for the whole dashboard period being presented. In this case “1 Hour”
  • AVG: The Average usage percentage for the whole dashboard period being presented. In this case “1 Hour”
  • TOTAL: The total value for the whole dashboard period being presented (available for some metrics only). In this case “1 Hour”
  • Alert: If you have any alert created for the specific Metric.

CPU-image8

 

Conclusion: We are able to see the CPU metric, as in the examples above, in the Azure Dashboard as well as in the WAD tables in the storage account (if Monitoring is set to “Verbose”). However, the metric data in WAD tables consists of snapshots of the performance counter data from the role, aggregated at intervals of 5 minutes, 1 hour, and 12 hours; the data in the Azure Portal Dashboard, on the other hand, is the same data calculated and presented as an average. So both come from the same place, but they are presented in different ways.

 

Source:

https://azure.microsoft.com/en-us/documentation/articles/cloud-services-how-to-monitor/

Azure Emulator Crash with error 0x800700b7: Cannot create a file when that file already exists


Sometimes when you are using Visual Studio and working on Azure projects, you might hit an issue that causes your Azure emulator to crash.

When that happens, you will get the error System.Runtime.InteropServices.COMException (0x800700B7): Cannot create a file when that file already exists. (Exception from HRESULT: 0x800700B7)

Error

You can also try to run the compute emulator manually instead of using Visual Studio; the command you need is the following:

csrun /devfabric /usefullemulator

And from the command window, you can see that the compute emulator is started.

csrun1

However, using the same csrun tool you can check the current status of the emulator by running csrun /status, and you will see that the emulator is not running.

csrun2

You can check the DFService.log file generated by the emulator; those logs are located in the following folder:

C:\Users\<user>\AppData\Local\dftmp\DFServiceLogs

In the DFService log file you can see the same exception that is reported by Visual Studio (while trying to run the emulator) or by running the csrun command (to run the emulator manually).

 

DFService Information: 0 : [00003520:00000001, 2016/02/17 23:36:54.436]==============================================================================================================================

DFService Information: 0 : [00003520:00000001, 2016/04/17 23:36:54.436] Started: “C:\Program Files\Microsoft SDKs\Azure\Emulator\devfabric\DFService.exe” -sp “C:\Users\YYY\AppData\Local\dftmp” -enableIIS -singleInstance -elevated

DFService Information: 0 : [00003520:00000001, 2016/04/17 23:36:54.482] Exception:System.Runtime.InteropServices.COMException (0x800700B7): Cannot create a file when that file already exists. (Exception from HRESULT: 0x800700B7)

at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo)

at System.Runtime.InteropServices.Marshal.ThrowExceptionForHR(Int32 errorCode, IntPtr errorInfo)

at Microsoft.WindowsAzure.GuestAgent.EmulatorRuntime.EmulatorRuntimeImpl.Initialize(String runtimeConfigIniFile, String serviceName, String rootPath, String logFilePath)

at Microsoft.ServiceHosting.Tools.DevelopmentFabric.Fabricator.InitializeEmulatorRuntime()

at Microsoft.ServiceHosting.Tools.DevelopmentFabric.Fabricator.InitializeRuntimeAgents()

at Microsoft.ServiceHosting.Tools.DevelopmentFabric.Fabricator.Initialize()

at Microsoft.ServiceHosting.Tools.DevelopmentFabric.Program.Main(String[] args)

 

Now that the issue has been identified, how do we mitigate it?

First, I suggest you check this great support blog:

https://blogs.technet.microsoft.com/supportingwindows/2014/08/11/wmi-missing-or-failing-wmi-providers-or-invalid-wmi-class

Then, after you have checked the blog post, you need to identify the missing or failing WMI class by following these steps:

  1. Go to start-run and type in wmimgmt.msc
  2. Right click on Local WMI Control (Local) and select properties.
  3. On the general tab, if there are any failures noted on that box, that indicates a core WMI issue.
  4. Find the .MOF files for the Win32_Processor namespace/class

In this case, I saw that there were some invalid WMI classes:

  • Win32_Processor
  • Win32_WMISetting

wmierror

  5. Repair the MOF file by running mofcomp.exe <MOFFilename.MOF>. The mofcomp.exe tool is located in the C:\Windows\System32\wbem folder.
  6. Then re-register the associated DLL by running the command regsvr32 <DLLFilename.dll>
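As a sketch of those two repair steps: Win32_Processor is typically supplied by the cimwin32 provider, so the repair would look like the following (the exact MOF and DLL names depend on which class is failing on your machine):

```cmd
cd /d C:\Windows\System32\wbem
mofcomp.exe cimwin32.mof
regsvr32 /s cimwin32.dll
```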

fixIssue

  7. Verify whether it is fixed by checking the WMI Control (wmimgmt.msc) again. This time, as you can see in the image below, there are no more WMI class errors.

wmifixed

  8. Then re-launch the emulator; this time the emulator will run again with no issues or crashes.

I want to thank Wayne for his great and deep Visual Studio knowledge.

You can now keep enjoying Azure!

 
