Kent Weare's Integration Blog

SAP calling RFC hosted in BizTalk


If you have been following this blog you have most likely encountered a few blog posts related to SAP integration.  I also wrote two chapters related to SAP integration in a BizTalk book last year.  In all of the journeys that I have taken with SAP, not once have I encountered a situation where I needed to receive a request from SAP that required a synchronous response.  All of my previous experience when receiving data from SAP involved asynchronous messaging.  We now have a scenario that requires SAP to send BizTalk a “fetch” request, and BizTalk needs to provide this information in near real-time.  What was very interesting about this scenario was that many people within our organization didn’t think it was possible, or didn’t know that it was.  I guess it was tough for some to wrap their head around the concept of SAP actually needing information from another system, since it tends to be a System of Record.

SAPRFC.INI HUH???

A good starting point for this scenario is the MSDN documentation.  Initially I thought it would be pretty straightforward and would resemble a situation where BizTalk receives an IDOC.  That was until I received an error, similar to the one below, indicating that an saprfc.ini file could not be found when enabling the SAP Receive Location.

Log Name:      Application
Source:        BizTalk Server
Date:          3/6/2012 7:12:53 PM
Event ID:      5644
Task Category: BizTalk Server
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      SERVER

Description:
The Messaging Engine failed to add a receive location "Receive Location2" with URL "sap://CLIENT=010;LANG=EN;@a/SAPSERVER/00?ListenerDest=ZRFCADD&ListenerGwServ=sapgw00&ListenerGwHost=SAPHOST&ListenerProgramId=Biztalk_Z_Add&RfcSdkTrace=False&AbapDebug=False" to the adapter "WCF-SAP". Reason: "Microsoft.Adapters.SAP.RFCException: Details: ErrorCode=RFC_OK. ErrorGroup=RFC_ERROR_SYSTEM_FAILURE. SapErrorMessage=Open file 'C:\SAPINI\saprfc.ini' failed: 'No such file or directory'.  AdapterErrorMessage=Error accepting incoming connection. RfcAccept returned RFC_HANDLE_NULL..
 

This really confused me since I could successfully connect to SAP and receive IDOCs.  After some digging, I discovered the following webpage that indicated: “The RFC library will read the saprfc.ini file to find out the connection type and all RFC-specific parameters needed to connect to an SAP system, or to register an RFC server program at an SAP gateway and wait for RFC calls from any SAP system.”

So how do we solve this?

  • The first thing that we need to do is to create a System Variable called RFC_INI.  We then need to provide a path and a filename.  For the purpose of my example I used C:\SAPINI\saprfc.ini.

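If you prefer the command line over the System Properties dialog, the same machine-wide variable can be set from an elevated prompt (using the path from this example).  Note that the BizTalk host instances need to be restarted before they will pick up the new variable:

setx RFC_INI "C:\SAPINI\saprfc.ini" /M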

  • Next we need to add the contents to our saprfc.ini file.  The values that I needed to provide include (assembled into a complete file below):

    • DEST = ZRFCADD – In this case, this is the name of the RFC Destination that our BASIS team created for us from SM59.  More details here.
    • TYPE = R – Type R is for RFC server programs, or for a client program working with another external program as an RFC server program which is already registered at an SAP gateway.
    • GWHOST = SAP_HOST_NAME – In my case, this is the name of the physical server that is hosting the SAP Gateway.
    • GWSERV = SAP_GATEWAY_NAME – The name of the SAP Gateway.  A standard naming convention is SAPGW##, where ## is the system number of the SAP instance that you are working on.
    • PROGID = Biztalk_Z_Add – This is the name of the Program ID that has also been provided by BASIS.
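
Putting those values together with the host and gateway names from my receive location URI, a minimal saprfc.ini for this scenario would look something like the following sketch (your destination, host and Program ID will differ):

DEST=ZRFCADD
TYPE=R
GWHOST=SAPHOST
GWSERV=sapgw00
PROGID=Biztalk_Z_Add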

 

So if we compare the details between our Receive Location and our saprfc.ini we will see symmetry between the two.  However, the values in the ini file take precedence.


  • Now that we have our saprfc.ini file and SAP Receive Location in order, we can run a Connected Client test using SM59.  To perform this test, launch the SM59 transaction and then expand the TCP/IP Connections node.


  • Scroll through the list to find your RFC Destination.  In my case, I am looking for ZRFCADD.  Double click on this value.


  • Click on the Connection Test button to execute the test.


  • You should see a successful Connection Test.  If not, there is no point trying to call your RFC from SAP until you get this connection issue resolved.  If SAP can’t perform this “ping test” it won’t be able to actually send you data.  To troubleshoot this you will need to ensure that the values that you have in your receive location/ini file match the values that are defined in SM59.  In most cases you will need to rely upon your BASIS Buddy to help you out.  As I mentioned in my book, I do have a good BASIS buddy so this troubleshooting usually goes smoothly.


  • With our connectivity issues out of the way, we can browse for our desired schemas using the BizTalk Consume Adapter Service Wizard and then add them to our solution.


  • We can now build out the rest of our application.  In my case I have a very simple map that will add two values that occur in the request message and provide the sum in the response message.


  • The only special consideration that we need to take care of is to set the WCF.Action in the response message.  We can do this inside a Message Assignment shape.  If you don’t take care of this, you will receive a runtime error.

sapResponse(WCF.Action) = "http://Microsoft.LobServices.Sap/2007/03/Rfc/ZISU_RFC_ADD/response";
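
For other RFCs, the response action appears to follow the same pattern: http://Microsoft.LobServices.Sap/2007/03/Rfc/<RFC name>/response.  As a sketch, a hypothetical Z_GET_CUSTOMER RFC would be stamped like this (the message name is also an assumption):

// Message Assignment shape for a hypothetical RFC
customerResponse(WCF.Action) = "http://Microsoft.LobServices.Sap/2007/03/Rfc/Z_GET_CUSTOMER/response";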

We now are in a position to deploy our application and start receiving requests from SAP.

Conclusion

Overall the process was pretty similar to receiving IDOCs, with the exception of the INI file and the WCF.Action property needing to be populated.  Performance is similar to receiving or sending IDOCs, so you won’t take any additional performance hits.


Testing your RFC hosted in BizTalk from SAP


In a recent blog post I discussed how you would host an RFC in BizTalk that SAP could call.  I thought it would be helpful if I also included a post discussing how you can test your BizTalk hosted RFC.  Like most BizTalk developers, I wanted to be able to perform some tests on my own without bothering our SAP team, and by following these steps I was able to do just that.

NOTE: Performing the following steps will require developer permissions in SAP.  I suspect that with this kind of access I could get myself into all kinds of trouble.  Luckily for me, our SAP Security team didn’t ask me too many questions when I requested access.

 

Testing

  • Execute the SE37 “ABAP Function Modules” transaction.
  • Type in the name of your Function Module.  In my case it is ZISU_RFC_ADD.  Then click the Test/Execute button (highlighted in red) or press F8.


  • We now have the opportunity to provide our inputs.  If you recall from my previous post, this RFC will accept two integer inputs and then provide the sum in the response message.  Once we have our two inputs populated we can click on the Execute button (highlighted in red) or press F8.

 

  • Our result will now be displayed on screen and we will discover how long it took to get the result.


Exceptions

So what happens if BizTalk is not available when you perform one of these tests?  I was curious as well so I simply disabled my receive location, executed my test and received the following result.


It’s not pretty, but it indicates that the program “Biztalk_Z_Add” is not registered.  This is saying that our listener that is using this Program ID is not online.  This is similar to the error message that we receive during a failed Connected Client test from SM59.

Conclusion

SAP can be a very large and daunting system at times, but the more I get into it the more comfortable I feel about its ability to integrate with other systems.  Tools like this one (SE37) and the IDOC resubmit transaction (WE19) can be a BizTalk dev’s best friend when integrating with SAP.

2012 Canadian Leadership Summit–Day 1


Every year Microsoft invites some key customers down to Redmond to see the latest and greatest technology.  The Summit is geared towards IT Leadership so unfortunately this time around  I will not get to hear Clemens Vasters speak about Service Bus during this trip to Redmond.

In the welcome session, they discussed the following:

  • This event continues to grow and has 50% more attendees than last year
  • Themes
    • Saving Money and Gaining efficiencies
    • Drive innovation
    • Grow our Business
    • Support changing User expectations
  • Samsung has the exclusive rights to Surface
  • New President of Microsoft Canada: Max Long

Sessions

 

Business of the Future (Dynamics AX and CRM)

Dynamics AX 2012 - Business Workloads and Suites

In this session we got the “state of the nation” when it comes to Microsoft’s Dynamics unit.  More specifically it related to Microsoft Dynamics 2012 and Dynamics CRM 2011.

A very common theme was around Workloads.  Workloads meaning processes and where those processes take place.  In some cases those Workloads may take place on-premise where as others may take place in the Cloud.  Also some Workloads may be supported natively by Dynamics AX 2012 and some may be supported by an ISV product.

Microsoft has been investing heavily in Workloads.  Below is a list of various Workloads that they support.  Some may be industry specific and others may be common within organizations (Expenses)

    • Business Workloads
      • Industry Operational Workloads
        • Retail
        • Manufacturing
        • Distribution
      • Horizontal Operational Workloads
        • HCM
        • Project
        • Budget Formulation
        • Expenses
        • SRM
        • Sales Force Automation
        • Customer Care
        • Marketing Automation
      • Administrative “System of Record”
        • HR
        • Finance

Dynamics AX Adoption

  • Great momentum in Retail sector for Dynamics
  • Microsoft had a clear industry focus for this release
    • Manufacturing
      • Food and Beverage
      • Chemical
    • Public Sector capabilities were introduced
    • Professional Services capabilities were introduced
  • Acquiring or Building Intellectual Property to support more industry specific solutions
  • Are now able to bring processes into a unified solution

Eating their own Dog Food

Microsoft runs on Dynamics AX

  • Got rid of Siebel
  • XBOX manufacturing is using Dynamics AX
  • Expenses

Major Customers in Canada

    • Royal Canadian Mint
    • Petro-Canada
    • Teck
    • Subaru
    • Techo-Bloc
    • Wakefield
    • Cordy
    • Grics
    • David’s Tea

Adding more core, industry specific capabilities is a good idea in my opinion.  Having spent a fair amount of time with SAP solutions, it is quite evident that SAP puts an emphasis on industry specific solutions like the Utilities module: ISU.  I always felt that Microsoft relied too much on 3rd party applications to fulfill these verticals.  I am happy to see them put more emphasis on industry.

 

CRM

The next portion of the session focused on CRM 2011.  They showed an amazing demo where they built a Metro user interface on top of CRM 2011.  The idea behind this was that this company (a beer company) has a mobile workforce that includes “Beer Rangers” (how cool of a job title is that).  These Beer Rangers are much like account managers.  They need to access CRM to manage their client engagements.  Previously, this Brewery had issues with CRM adoption.  They found that the Beer Rangers were not using CRM as much as they should have.

The user interface was extremely fluid.  You would not even know that this was a CRM system unless someone told you it was.  Since the target audience was a mobile workforce, this demonstration was done on a Windows 8 tablet. 

I am responsible for managing our CRM implementation at my organization.  My initial thoughts were that “I want that solution”.  It was amazing.

 

Neat Statistics

  • In the last 6 months Microsoft has added 250k users
  • 2.25 million users total
  • 33k customers worldwide
  • 31 double digit growth quarters
  • 60% of new customers use CRM Online

Major Customers in Canada

  • Hydro One
  • Globe and Mail
  • Children’s Wish
  • Big Brothers/Big Sisters
  • Paladin
  • Starshot
  • Legal Aid Alberta
  • Tourism Whistler

 

A Perspective on Cloud Computing & Adoption – Steve Martin

Steve delivered a very practical presentation on Windows Azure.  I have been to many Azure sessions that talked about how everyone needs to be in the cloud and made it seem like you were on the outside if you weren’t.  From the beginning, Steve mentioned that we would not see one Windows Azure logo in his presentation, and he was right.  He gave a very forthcoming, honest talk on when and why you should use the cloud.  He also provided a lot of candid information about areas where you should not use the cloud because it just does not make sense.

When should I embrace the cloud?

Economics

In the majority of cases you will not save money by moving a workload to the cloud. (Yes someone from Microsoft actually said this)  From Microsoft’s perspective they have seen customers save money when using Azure for:

  • Dev/Test scenarios
  • Temporary workloads
  • Bursts
  • Proof of Concepts – You can perform many tasks over the course of an afternoon for less than a cup of coffee.

He then offered that you will not save money in situations where:

  • You have sustained (long term) utilization.  The cost of compute is still too high for financial benefits.

Architecture

People will move to the cloud to take advantage of architecture building blocks that either do not exist on-premise or are outside that organization’s core competency.  From a personal perspective, this has always resonated with me.  The pure elasticity of Azure is just not something that can be easily emulated within your own datacenter.  Also, when I look at some of the opportunities that technologies like the Service Bus provide, it just makes sense to move some of these workloads into the cloud.

 

Who is adopting?

  • Startups
    • Are deferring capital expenses until they have reached the scale where spending the capital makes sense
  • 32 of Global 100 are using some form of Azure
    • Mixture of  DEV/TEST
    • Pilots
    • Some production

Cloud Adoption Patterns

  • Publicly facing applications
  • Applications that move between Public and Private
  • Unpredictable & Variable Workloads
  • Application Development (DEV/TEST)
  • Temporary initiatives
  • Sizing & Tuning for investment

Neat Statistics

  • In the past 5 hours (from the time I arrived at the conference center to the time of writing):
    • 6.49 million cloud compute hours were consumed
    • 12,158 new virtual machines have been spun up
    • Azure has the same amount of compute power as the entire universe had in 1999

 

Cloud is a double-edged sword

You know the term “it takes money to make money”?  For many years companies that could afford to spend a lot of money on R&D and infrastructure had more opportunities to create the next big thing.  With Azure, and cloud computing in general, the playing field has now been leveled.  Your competitors now have access to the same toolset that you have access to.  These toolsets will now allow businesses to scale at levels that previously just weren’t possible.  In fact, Professor Richard Foster of Yale University is predicting that by 2020 more than 3/4 of the S&P 500 will be companies that we have not heard of yet.  Considering that Facebook went from nothing less than 10 years ago to approaching a 100 billion dollar IPO, I agree with Professor Foster.

 

So those are some of the highlights from Day 1.  I will also publish my thoughts from Day 2 on Thursday.

2012 Canadian Leadership Summit–Day 2


On day 2, the Summit contained breakout sessions that allowed us to dig into some of the topics that were introduced on the first day.  More specifically we were able to dive into topics such as Windows 8 and the Consumerization of IT, Dynamics AX 2012 and CRM “R8”.

Consumerization of IT

With worker demographics changing, employees are now placing new demands on organizations to use more modern technology or let them use their own.  Many recent college graduates don’t remember a time when there wasn’t an internet.  Many employees have more computing power and more modern equipment at home than they have in the office.  These situations are creating headaches for Infrastructure Managers, IT Directors and CIOs.

Windows 8 provides some tools that address some of the needs of this emerging demographic.

Windows 8 for the Enterprise

  • No compromise business tablet
    • The flexibility of a tablet with the productivity of a desktop
    • Picture Password -  No longer are you forced to remember some password with ridiculous password requirements.
    • Touch first experience – introducing touch into Microsoft’s latest Operating System is not an afterthought.  This Operating System was built with touch in mind.
    • “Always” connected applications through Live Tiles.  No longer do end users have to open applications to determine whether they have received a new purchase order, or be glued to their inbox for new alerts.  This information can be presented in the form of a Live Tile, much like you have on your Windows Phone 7.
    • When in “Windows 7 mode” applications behave the same way as in Windows 7.  Not all applications will be “Metro” ready at launch or any time soon after that.  If you have an application that will run on Windows 7, it will run on Windows 8.
    • You can dock “Windows 8” applications beside “Windows 7” applications.  You know that feature in Windows 7 that allows you to dock applications side by side?  You can still do this in Windows 8, and can even dock a Windows 7 application beside a Windows 8 application.
    • Tablets can be managed by existing infrastructure tools like SCCM
  • Innovative Devices
    • Touch
    • Long battery life
    • Thinner, lighter, faster footprints
    • Convertibles – want the traditional experience of a keyboard and mouse but the ability to detach your screen and use it as a tablet?  If so, this functionality will be available
    • Workers who require more durable footprints will have the ability to use ruggedized laptops.
  • Booting from a USB device
    • They showed a demonstration where they had a Windows 7 computer and inserted a USB key that contained a Windows 8 corporate image.  They rebooted the computer and were able to boot Windows 8 off of the USB drive.  They then showed a demo where they were playing a video from the Windows 8 computer.  They pulled the USB drive out from the physical computer and the video froze.  The presenter then plugged the USB drive back in and the video resumed.  If you plug the USB key in within 60 seconds, life is good.  If you take longer than 60 seconds, then the machine will be shut down.
    • This feature provides a lot of potential for people who want to bring their own device to work or where you have contractors who bring their own laptop but you want them to run your corporate Windows 8 image.
  • Enhanced Bitlocker support
    • Windows 7 introduced the ability to encrypt a USB key.  The problem was that you had to encrypt the entire volume of the drive.  If you had a larger USB key, like 32 GB or 64 GB, this operation took a long time.  New in Windows 8 is the ability to just encrypt the data instead of the entire volume.  This is a great balance between performance and security.
  • New Security features
    • New boot loader features will detect when OS files have been tampered with upon boot up.  If the boot loader detects malware, it will make a connection to obtain anti-malware drivers, load them and remove the malware.
  • Virtualization and management
    • VDI is a technology that allows organizations to operate a farm of virtualized Windows clients.  Much like organizations can run virtualized server farms, they can also run virtual desktop farms.  Windows 8 offers a superior VDI experience compared to Windows 7.
      • Scrolling over VDI is very fluid even when using Touch.  This is rather remarkable considering that the Server hosting the Windows 8 image has no hardware support for touch.
      • End users can pinch, zoom out and zoom in
      • Full fidelity – watching video is flawless over LAN and WAN configurations
      • USB re-direction allows you to plug in a USB device on the local machine.  A signal then gets sent to the Host Server and it is then rendered in the VDI session
      • Storage Pools allow administrators to manage a pool of disks instead of having a hard quota set for each client
  • Windows 8 Market
    • With 525 million Windows 7 users, Microsoft has high expectations for Windows 8 adoption and plans to offer a Windows Marketplace that supports 200 locales
  • Consumerization of IT an Opportunity or Risk?
    • It can be both
    • Consumerization Device Scenarios
      • On your own (low control)
      • Bring your own (medium control)
      • Choose your own (Enterprise full control)
      • Here is your own (Enterprise Full Control)
    • Microsoft has different policies depending upon the scenario
        • Classify devices and then provide the appropriate enforcements
    • Access Strategy
        • User Based: who are you? (e.g. no access, read/write, full control)
        • Device Based: how much do I trust the device? (e.g. managed vs. unmanaged)
        • Location Based: where are you? (e.g. intranet vs. internet)

Dynamics AX 2012

Prior to this event, I have not had a lot of exposure to Dynamics AX.  What became extremely evident is that Microsoft is very aggressive and committed to the ERP segment.  They are also not only interested in establishing a platform but are also interested in providing industry specific solutions.

 

Proactive Applications

  • Knowing where we have been is no longer enough, we now need to know what is going to happen next
  • End users need a UI based upon their role (Role Tailoring).  The CFO needs different info than the shipping clerk and the information better be populated on the front screen.
  • Workflow Inside
    • Can’t be a bolt on. Needs to be inherent in the application
  • Visualization
    • Use external data and make it a part of how you use internal data for forecasting.  For instance, the weather having an impact on inventory positions.  We can use this data to compare patterns.  This will allow for additional insights from both internal and external data that has been collected.
  • Out of the Box integration with Dynamics CRM so that you have 1 view of the customer

Microsoft’s Approach

  • Simplicity and Agility
    • Business Processes are subject to change so quickly that your ERP needs to have more agility than it has had in the past
  • Cloud
    • CRM online or on-premise
    • This time next year you will be able to run AX in Microsoft’s cloud
  • Microsoft Technology
    • Dynamics is leveraging existing investments in other Microsoft technologies like:
      • Kinect
      • Office 365
      • Bing
      • Windows
      • SQL Azure
      • SharePoint
      • Windows Phone
      • Microsoft Lync

What is Dynamics AX 2012?

  • Core ERP
  • BI
  • Industry solutions

Cloud on your terms

  • Embrace Hybrid
  • Pay as you go and Grow
  • Deliver choice

Other

  • Microsoft is working on Metro screens for Dynamics AX 2012
  • Lines between AX and CRM are starting to blur
  • Around 2000 people actively work in the AX organization

CRM – What’s coming down the pipeline?

Big trends in CRM

  • Big Data
  • Social
  • Cloud
  • Mobile
  • Core CRM

CRM “R8”

  • More Mobility options
    • CRM on an iPad will be released within the next few days
    • The iPad version has synchronization capabilities so you can go online and offline.
  • Browser flexibility (support for other browsers like Firefox, Chrome, Safari)
  • Social
    • Partnerships with LinkedIn
    • Both at the company level but also at the contact level
    • News aggregation about your customers gets sucked in from external sources.
    • CRM users also have the ability to share information from CRM to LinkedIn, Twitter, Facebook, Email or the CRM Activity Feed Wall, which is an internal “posting board”
  • Industry templates
    • Repetitive Intellectual Property that has been acquired from ISVs and Partners
    • Capital Markets/Wealth Management
    • HealthCare/Health Plans
    • Process-based Manufacturing
    • Certifications
      • Microsoft continues to certify their offerings against industry standards

CRM Growth

  • CRM is one of the fastest growing businesses within Microsoft
  • Extensive Customer list across industries:
    • Financial Services
      • ING
      • Barclay
    • Professional Services
      • Volt
      • PointBridge
      • Hitachi Consulting
    • Manufacturing
      • Volvo
      • Statoil
      • Niko
    • Public Sector
      • City of London
      • Kent Fire and Rescue
    • Retail
      • Cold Stone
      • BestBuy
    • Health and life sciences
      • Pfizer
      • Novozyme
    • Travel and entertainment
      • Phoenix Suns
      • Portland Trailblazers
      • Toledo Mudhens
      • Arizona Diamondbacks

So this concludes the 2012 Canadian Leadership Summit.  Overall, there were some good sessions.  I really enjoyed seeing what is coming down the pipeline in the areas of Windows 8, Dynamics AX and CRM.

Book Review: BizTalk Server 2010 Cookbook by Steef-Jan Wiggers


Recently I have found myself with some time to review a book instead of writing one. This time around it was a BizTalk book written by fellow BizTalk MVP Steef-Jan Wiggers.  Steef-Jan is no stranger in the BizTalk community.  He has been very active in the MSDN forums, reviewing and writing books and has been busy on the speaking circuit in the Netherlands, Sweden, Canada and more recently Italy.

Steef-Jan has a lot of BizTalk experience, so it should come as no surprise that he has written a very practical book based upon a lot of real-world experience.  This experience is demonstrated constantly throughout the book and was very refreshing.

The book itself focuses on many areas including:

  • Setting up BizTalk Environments
  • BizTalk Automation Patterns
  • Instrumentation, Error Handling and Deployment
  • Securing Message Exchange
  • WCF Messaging with BizTalk
  • BizTalk AppFabric Connect
  • Monitoring and Maintenance
  • Rules Engine
  • Testing BizTalk Artifacts

So based upon these areas of discussion it is impossible to go into a tremendous amount of depth on each topic.  However, Steef-Jan will provide a practical example and then leave the reader with links to valuable resources where you can dive deeper if you wish.

One of my favorite parts of the book was the Instrumentation, Error Handling and Deployment chapter.  I feel that this chapter introduces some of the “tricks of the trade” that can really provide some benefit to a BizTalk developer.  In this chapter you will discover tools like the BizTalk Benchmark Wizard and NLog.  These are both valuable tools that we have included in our BizTalk deployment at my organization.

Another area of the book that I enjoyed was the Monitoring and Maintenance chapter.  More and more we are starting to hear about Monitoring and Maintenance.  Steef-Jan discusses tools that can really provide visibility into your BizTalk environment including SCOM, BizTalk 360 and MessageBox Viewer.

All in all the book was excellent.  I can’t say there was a topic that I didn’t feel was appropriate.  I think there are probably some areas that could definitely benefit from having more details included.  But the book is a recipe book, and I think it delivers on what is promised in the book’s overview.  Congrats Steef-Jan on putting together a great book!

You can find the book on both the Packt and Amazon websites.


BizTalk Book Release: Microsoft BizTalk Server 2010 (70-595) Certification Guide


I received word from Packt Publishing today that our book has officially been published.  This book has been around 10 months in the making so it is very rewarding to see this book released.

The intent of this book is to prepare experienced BizTalk developers and administrators with the tools that they need to write the exam.  It was an interesting project and different from my last book adventure.  In the BizTalk LOB book we were really interested in addressing a few core concepts within a particular chapter.  For instance, in the SAP chapter I emphasized how to send and receive IDOCs.  This time around, in the MCTS book, we needed to address the requirements of the exam to ensure that we provided proper coverage.  I definitely feel that we did hit the areas that we needed to hit.

Another interesting aspect of the MCTS book was our existing NDA, as we all wrote (and passed) the exam before starting out on our journey.  This book is not a cheat sheet.  You won’t find the exact questions and answers from the exam.  Overall the book is around 467 pages, so there is a lot of content to cover.  If you invest the time in reading the book and working through the examples and practice tests, I am confident that you will do well on the exam.

It was a pleasure working with the other authors: Johan Hedberg and Morten la Cour.  Writing books is not always a smooth process and Johan, as the lead author, did a great job keeping the book moving.  I had not worked with Morten before the project and appreciated the professionalism that he brought to the project.

Creating a polished product like this book takes more than just talented authors.  There is a core team working behind the scenes that really makes these projects successful.  With that said, I would like to thank the Packt team for their determination in getting this project to the printers.  I would also like to thank the very talented reviewers who held us in check and increased the quality of the book.

The book is available for purchase from both Packt and Amazon websites.

 


 

What’s Next?

I have already been asked by a few people: what’s next?  Do you have another book coming out?  I do not have any immediate plans to write another book.  If a topic came up that was extremely interesting I would consider it.  Writing is a very large time commitment and has been a very positive experience, but I do want to spend some time reading (and learning new things) instead of writing.

Microsoft TechEd North America 2012–Day 1


Keynote

The Keynote started off like a Keynote from another technology company.  A DJ was spinning tunes and also showed off his new ‘turntable’ that runs on Windows technology and is all the rage these days (so he says).

 

 


Post DJ dialog, Satya Nadella stepped up to the mic and provided some insight into the state of computing inside and outside of Microsoft.  He provided the following:

  • We are at the shift of a new paradigm.  Much like at the dawn of Client-Server, we find ourselves in a position where we need to re-invent ourselves by leveraging new technologies such as the cloud.
  • Microsoft is focused on providing Services at internet scale
    • Microsoft currently runs some of the world’s biggest apps (Xbox Live, Bing, Exchange, CRM)
    • feedback loop
    • global scale, 24x7
    • ultra-modularity
    • 16 major Microsoft datacenters around the world
    • Bing: 300 petabytes of data
    • Microsoft battle tests each piece of new technology released
    • Bing is running on top of the Windows Server 2012 RC
      • You can’t “head fake” this type of scale

Next up was Jason Zander and he was here to speak about some of the advancements in the Windows Server and System Center product space.  Jason added that datacenters, whether on-premise or cloud, are required to be more responsive to ever changing business needs.  Much like everything these days, it seems, people want services cheaper, faster and delivered yesterday.  The advancements in Windows Server and System Center have been created to address these needs (well, except the yesterday bit).

The Modern Datacenter needs to be:

  • Scalable and elastic
  • Always up, always on
  • Shared Resources
  • Automated and self service

Windows Server 2012

  • Windows Server 2012 has been designed to handle “workloads that can’t be virtualized”
    • unprecedented hardware configurations
    • 64 node clusters and 4000 VMs in a single cluster
    • If you own a SAN, using Windows Server 2012 is a no-brainer
      • 10 GB file copied in 10 seconds using ODX
    • 80,000 customers downloaded the Windows Server RC in the first week
    • ~150 TAP customers

Mark Russinovich was up next.  He wanted to discuss some of the advances that the Azure team has made in the area of supporting durable Virtual Machines in Azure.  One of the more humorous moments was when he referred to a slide that had a Linux logo on it: “no we didn’t get hacked, we do support Linux”.

Provisioning Cloud infrastructure

  • New Metro interfaces in portal
  • VMs now supported includes
    • OpenLogic
    • SUSE
    • Ubuntu
    • Windows Server 2008 R2
    • Windows Server 2012 RC
  • We can deploy a new VM into a segregated “corporate network” in Azure (I assume this means what was previously known as “Brooklyn”)

The CIO of Aflac, an insurance company that has deployed a SharePoint solution to Azure, was brought on stage to share his experiences.  They have built a solution by provisioning VMs into Azure.  Aflac chose Azure based upon the agility and flexibility that it provides.

Cloud and Aflac (CIO)

Requirements

  • Need Agility
  • Flexibility
    • Aflac may run promotions or have peak usage periods based upon customer policy expiration dates
  • Performance
    • They have many different locales and support offices in many different regions.  They need global scale to reduce latency
      • Japan is a heavy user of their services

Solution

  • SharePoint 2010
    • Built on VMs hosted on Azure
    • One use case is to schedule customer appointments  with a Customer Service representative
      • Capture this information in Azure and then bridge it back to On Premise using VPN (Azure Connect)
  • 12 VMs support solution including
    • SharePoint App Servers
    • SharePoint Web front ends
    • SQL Server cluster
    • AD cluster
    • SCOM
  • Use availability sets to further mitigate failure points
  • Looking to use Azure for extranet and customer scenarios

The Keynote then shifted towards the Developer experience including new Azure and Visual Studio 2012 functionality.

Inside the modern Application

  • Personal App experiences are now mandatory
  • Social is something that needs to be built into applications and can’t be a bolt-on
  • Build, Measure, Learn, Move

New Tools:

  • Visual Studio 2012

Frameworks

  • .Net 4.5

Comprehensive Runtime

  • Windows Server
  • Azure

 

ASP.NET MVC4/VS 2012 RC

    • Built in support for Mobile and Web Applications
    • pluggable emulators

HTML is now supported for LightSwitch

  • You can create a Mobile Client using HTML5/jQuery/JavaScript/CSS

Keynote Summary

Having the DJ out there at the beginning was a nice touch.  I am sure they wanted to pump up the crowd and generate some excitement.  I think they hit the mark here, but slowly the energy started to drain.  There were several demos that didn’t work or required a second take to run.  This is a bit of a letdown as an attendee.  Sure, “stuff happens”, but it seems like there could have been more rehearsing happening beforehand.

I also felt Microsoft missed an opportunity to talk about Windows 8 and consumer devices.  Arguably this is not the right time or place to do this, but everyone in the building uses a computer on a day to day basis as part of their job.  I would have loved to have seen some demos of some slick new tablets or Ultrabooks running the latest offerings.  Ironically, I heard more people talking about the newly released MacBook Pro today than Windows 8 devices.  I see this as suitable evidence that they missed the mark, considering the “pro” Microsoft audience at this conference.

Windows Azure Today and Tomorrow (Scott Guthrie)

If you have never seen Scott speak…you are missing out.  He has this uncanny ability to take an awkward situation and make people laugh.  He did this at Summit with his “Bob” Azure site and once again today with his “Dude” Azure site.  He is very engaging and enjoys ‘pop-star’ status amongst the Microsoft loyalists.

In his presentation, he elaborated on some of the recent news that he shared at the Meet Azure event.  More specifically he focused on the new durable VM support, Azure Websites and Service Bus.  Surprisingly, he even included “BizTalk” a couple of times, and he did not include the words “death, dead, soon to be dead or shot dead” in the same sentence.

General Update

  • Azure is flexible, open and solid
  • Microsoft has opened their minds to new platforms and open source
    • Linux hosted on Azure
    • SDKs being opened and hosted on GitHub
  • 99.95% monthly SLA
  • Pay only for what you use
    • Scale your resources as you grow

MSDN

Scott then reminded the attendees with MSDN that it includes Azure benefits

  • Benefits are based upon type of MSDN account (Pro/Premium/Ultimate)
  • Free trial
    • If you use MSDN your hours will be credited

Durable VMs

  • Create a new VM in seconds/minutes
  • Users have Full Admin on the provisioned server via RDP
  • Can install your own applications on the durable VMs including SQL, SharePoint and BizTalk (check out Steef-Jan’s BizTalk post here)
    • Storage is replicated to 3 locations
      • Seamless failover
    • Async  backup to another datacenter at least 500 miles away
      • Don’t have to do this if you don’t want to
  • Websites
    • Build with ASP.Net,  Node.js or PHP
    • Deploy in seconds with FTP, GIT or TFS
    • Start for free, scale up as your traffic grows
  • Shared Mode (Free)
    • 10 websites for free
    • other tenants co-hosted
  • Reserve Mode (Pay as you go)
    • Dedicated VMs
    • No other tenants
  • Charge for VMs on a per hour basis
  • Converting existing applications to cloud is easy
    • Right mouse click and select “Add Cloud Services Project”
    • Another Azure package will be added to solution
    • Create new roles
      • backend
      • front end
  • VMs are always being monitored and if a failure does occur, your application will be migrated to a new VM to ensure business continuity
  • You have the granularity to spin up or down a particular worker role.
  • You can RDP into a Role instance as well
  • Cloud allows you to focus on apps and not infrastructure
  • Azure is great for the following scenarios:
    • Burst/bust scenarios
    • Seasonal events (tax, Christmas, Thanksgiving)
    • Dev/Test
    • Sales/Promotions
  • Many SDKs are supported
    • .net
    • java
    • python
    • php
    • node.js
  • SQL Database
    • Relational SQL Server Engine in the cloud
    • Clustered for HA
    • Fully managed service
      • backups
    • SQL Reporting support
    • Provisioned in seconds
    • Can scale to 150 GB in seconds
    • Can be accessed from ADO.Net
      • Can be accessed from on-premise/cloud, websites etc.
  • BLOB storage
    • HA, scalable and secure file system
    • Blobs can be exposed publicly over HTTP
    • Continuous geo-replication
  • Distributed Cache
    • low latency, in-memory distributed cache
    • Dynamically grow and shrink cache size
    • HA support
    • memcached protocol support
    • Twitter demo went from 1.6 seconds to retrieve tweets from Twitter down to 0.29 ms
  • ServiceBus
    • Secure messaging and relay capabilities
    • Easily build hybrid apps

Windows Azure Today and Tomorrow Summary

Overall it was a good session.  Like I mentioned before, Scott is a great speaker and I enjoyed listening to him.  Some of the content he provided I have seen at Summit, but I certainly can’t hold that against him.  I think this was a great introduction to Azure for people that have not seen it before or for those who took a look a few years back and are now interested in learning more about it.

Microsoft TechEd North America–Day 2


 

Day 2 Keynote

Day 2 opened with another keynote, this time led by Antoine Leblond.  It did address some of my concerns with Day 1’s keynote.  This time the focus was on Windows 8 and they did demonstrate the OS running on a few different devices.  The demos were nice and relevant; however, they did not demonstrate it on new hardware.  They used the Samsung Series 7 slate with the Samsung Series 9 laptop.  They also demonstrated Windows 8 on a Lenovo laptop (sorry, nothing cool about a Lenovo).  If they wanted to generate some excitement, and didn’t want to pull out a shiny new beta Samsung, they should have just bought a new MacBook Pro and run Windows 8 on it.  I think Microsoft missed another big opportunity, but I digress.

Line of Business Apps

They did demonstrate some good looking Metro applications on the devices.  They provided a quick overview of the Beer Ranger app that I mentioned in a previous post.  They also demonstrated a native SAP application that is part of their Sales Automation Pipeline lineup.  It was a nice looking app but, much like many other SAP applications, I had no idea what it was supposed to do (oops, did I say that out loud?).

Trackpad

My primary personal (non-work) machine is a MacBook Air that runs Windows 7.  Many people ask why I do this and how I am missing out on all of these touch gestures.  For me, I love my Microsoft Arc Touch mouse and quite frankly couldn’t care less about the gestures that you can use on a trackpad.  But for those of you who like using gestures on a trackpad there is some good news.  Windows 8 will support “Apple like” gestures, including semantic zoom and access to the new “App bar”, to name a few.

 

Developing for Windows 8

Antoine provided us with a rundown of what is involved in building applications for Windows 8.  Here are some of the highlights.

  • Windows Runtime (WinRT)
    • New API set that allows you to build apps and games
    • Supports touch, keyboard and mouse
    • Support for “contracts” so that apps can leverage OS functions like “share”.  Much like you can share data on a Windows Phone 7, you will be able to perform a similar function with your own application by hooking into these contracts.
  • Tool Support
    • new release of Visual Studio (2012)
    • C++, C#, JavaScript, css, html are all supported technologies
  • Language and platform support for inline async calls (see the sketch after this list)
    • I really like this feature.  I was never a big fan of all of the delegate spaghetti code a person previously had to write to support async methods.  In a previous post, I had to write async REST methods to support calling REST based services from a Windows Phone 7 app.  I expect it to get much simpler now.
  • Visual Studio Simulators
    • Can simulate different types of hardware (slates, desktops, large displays, small displays, etc.)
    • Rotate screen
    • Higher/lower resolutions
    • Touch gestures
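
To illustrate the async improvement mentioned above, here is a minimal C# sketch of the new await pattern in .NET 4.5 (the class, method and URL are hypothetical).  The callback/delegate plumbing that used to be required simply disappears:

using System.Net.Http;
using System.Threading.Tasks;

public class RestClient
{
    // One awaitable method replaces the old callback/delegate pair
    public async Task<string> GetCustomerAsync(string url)
    {
        using (var client = new HttpClient())
        {
            // Execution resumes here when the response arrives; no delegates required
            return await client.GetStringAsync(url);
        }
    }
}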

ARM Support

This was interesting to see as I have heard a little bit about ARM support but just have not seen it in action. 

Some of the benefits of leveraging ARM based devices include:

  • Low power consumption
  • Long Battery life
  • Trusted boot
    • Validate all code in boot path before it runs
    • Device encryption is always on
  • App model
    • Geared at making apps that don’t alter the state of the machine (Security benefit)
  • Can use same Management infrastructure to manage devices
  • Metro apps work on WinRT as well
    • An RDP client does exist so that you can log into other Windows PCs/Servers
    • Apps must be signed by known trusted authority or have appropriate cert
    • Apps must have been run through the “WAC” approval application
      • used to honor design principles about not altering the state of the machine
  • Key office applications are available

Essential Tips for the Windows Azure Startup

This was a really interesting session.  Michele Leroux Bustamante is well known in the WCF and MVP community as a person with deep technical skills.  I have seen her speak before at a previous TechEd, so I thought this would be a good session to attend.  Something that I appreciate about Michele’s presentation style is that she remains composed throughout the entire presentation.  Even when she runs into some issues, such as a demo not quite working out, she is able to recover with a tremendous amount of poise.  I believe she is Canadian; maybe this has something to do with it.

This time around she was giving guidance on building a startup based upon Windows Azure.  It was a very enlightening session and it was quite evident that she “gets it”.  She has acted as a consultant to many start-ups and provided the following tips for building applications for startup companies.  Something to keep in mind is that these principles, while applicable to startups, are also just good practices to follow even if you are a well-established brick and mortar company.

  • Startups need to show some traction early on
  • Go fast, maintain quality
  • Monitor status, analytics and adjust accordingly
  • watch for conversion rates
    • do visitors create accounts

10 Essential Tips

Here they are as they were presented to the audience:

1. Design for Role Scale out

    • Needs to happen up front.  By the time you need to scale it will probably be too late, or at least more difficult.
    • Need to design for scale; you may want to segregate or isolate controllers to allow you to further scale out functions that may be more popular or have more access patterns (Mobile, API)
    • Domain Mapping
      • Create a CNAME or A Record for the IP address of your production deployment

2. Use an SMTP relay service

    • Most applications require some form of email communication
      • Can use System.Net.Mail.SmtpClient (see the sketch after this list)
    • Write email “messages” to a queue and then dequeue and send
    • Need to use a relay service so that your “From” email address does not get blocked or marked as spam
    • smtp4dev is a great tool for use in development
    • authsmtp is a production-ready email relay service that may be beneficial
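
A minimal sketch of the queue-then-send idea with System.Net.Mail (the relay host, port and credentials are placeholders; in development you would point this at smtp4dev on localhost instead):

using System.Net;
using System.Net.Mail;

public static class MailSender
{
    // Called by the worker that dequeues pending email "messages"
    public static void Send(string to, string subject, string body)
    {
        using (var client = new SmtpClient("relay.example.com", 587))
        {
            client.Credentials = new NetworkCredential("smtp-user", "smtp-password");
            client.Send("noreply@example.com", to, subject, body);
        }
    }
}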

3. Configuration Profiles

    • Avoid web.config for:
      • settings that vary between staging and production
      • settings that require experimentation for performance
      • settings that support diagnostics and test
    • Use the Azure service configuration files instead (see the sketch after this list)
      • ServiceConfiguration.Local.cscfg
      • ServiceConfiguration.Cloud.cscfg
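
Reading a setting then goes through the role environment instead of web.config; a minimal sketch (the setting name is hypothetical):

using Microsoft.WindowsAzure.ServiceRuntime;

public static class Config
{
    // Returns the value from whichever .cscfg is active (Local vs. Cloud);
    // the value can be changed on a live deployment without a redeploy
    public static string StorageConnection()
    {
        return RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString");
    }
}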

4. Don’t forget to Cache

    • You don’t realize how much latency accessing frequently used data creates
      • Co-locate cache memory across roles
      • Together the roles produce the total distributed cache
      • Any role can access it
    • Be careful: the cache is not durable, so entries may not live forever
    • Use for optimization
    • Performance increases are phenomenal (see the sketch after this list)
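
As a sketch of the cache-aside pattern with the AppFabric caching client (Profile and LoadProfileFromSql are stand-ins for your own type and data access):

using System;
using Microsoft.ApplicationServer.Caching;

public class ProfileCache
{
    private static readonly DataCache Cache = new DataCacheFactory().GetDefaultCache();

    public Profile Get(int id)
    {
        string key = "profile:" + id;
        Profile profile = (Profile)Cache.Get(key);
        if (profile == null)
        {
            // The expensive call we are optimizing away on subsequent requests
            profile = LoadProfileFromSql(id);
            // Entries are not durable and can be evicted at any time
            Cache.Put(key, profile, TimeSpan.FromMinutes(10));
        }
        return profile;
    }

    private Profile LoadProfileFromSql(int id) { /* your data access here */ return new Profile(); }
}

public class Profile { }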

5. Watch your Queuing costs

    • Costs may escalate due to the amount of polling
    • If you are polling, you are paying
    • Understand the differences between Service Bus and Azure Storage Queues 
      • Message lifetime
      • Max message size
      • Max total storage
      • Duplicate detection
      • Order guarantees
      • Dead letter queue
      • Storage metrics
      • Purge capability
      • Long polling/manual back-off polling
    • Initial decisions are about cost and agility
      • consider Storage Queues due to back-off polling (a sketch follows below)
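
A minimal back-off polling sketch against an Azure storage queue (queue setup and the Process handler are assumed; the exact client types depend on your storage SDK version):

using System;
using System.Threading;
using Microsoft.WindowsAzure.StorageClient;

public class QueueWorker
{
    // Every empty poll doubles the sleep interval up to a cap, since each
    // poll is a billable transaction; a message snaps us back to fast polling
    public void Run(CloudQueue queue)
    {
        TimeSpan delay = TimeSpan.FromSeconds(1);
        TimeSpan maxDelay = TimeSpan.FromSeconds(60);

        while (true)
        {
            CloudQueueMessage msg = queue.GetMessage();
            if (msg != null)
            {
                Process(msg);
                queue.DeleteMessage(msg);
                delay = TimeSpan.FromSeconds(1);
            }
            else
            {
                Thread.Sleep(delay);
                delay = TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, maxDelay.Ticks));
            }
        }
    }

    private void Process(CloudQueueMessage msg) { /* handle the message */ }
}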

6. Collect Diagnostics

    • When writing a new project, there is a lot of hacking going on because you are trying to be fast
    • There is a difference between getting it done and getting it done properly
    • Create a diagnostic helper and establish patterns (a sketch follows below)
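
The diagnostic helper idea can be as small as a static wrapper that gives every component one consistent call to make; a sketch (the names are mine, not from the talk):

using System;
using System.Diagnostics;

public static class Diag
{
    // One consistent logging pattern, picked up by the Azure diagnostics listener
    public static void Info(string component, string message)
    {
        Trace.TraceInformation("{0}: {1}", component, message);
    }

    public static void Error(string component, Exception ex)
    {
        Trace.TraceError("{0}: {1}", component, ex);
    }
}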

7. Monitor from outside

    • Azure Ping free monitoring tools
      • Sends SMS or email alerts when monitoring:
        • Storage
        • SQL
        • Queues
        • Is site running?
    • Azure Watch
      • Monitoring and alerts

8. Don’t drink the no-SQL Kool-Aid

    • VCs love “NoSQL”
      • There can be pressure from VCs to use it
      • VCs think it is cheaper to manage because you don’t need a DBA
        • You are asking for trouble if you don’t understand your relational data model
    • Need people who understand SQL to look into NoSQL and report back to the group on what the pitfalls are
    • Go to NoSQL for obvious stuff
      • search indexes
      • GEO data
      • profile data (coming from social media)
    • Keep core competencies in RDBMS
    • Then reach out to NoSQL  experts to help bridge the two worlds

9. Enable Social Logins and Simplify Sign Up

    • You want conversion rates – make it dead simple then!
      • ACS facilitates this – simple to use
        • Dirt cheap per transaction
    • The more you ask from a user to register, the less likely they are to sign up
    • Keep it simple and you will get conversions
      • Pinterest – only email address to sign up?

10. Estimate your costs

    • Layers of cost
      • Storage
      • Storage Transactions
      • Bandwidth (the #1 thing if you have a lot of media)
      • Cache
      • ServiceBus
      • SQL Azure
    • Need to run estimates
      • scenario based
    • BizSpark may offer some cost savings for new startups

Service Bus Overview – Clemens Vasters and Abhishek Lal

As usual Clemens put on a good show. This time Abhishek joined him in this presentation and provided some solid demos.  The first half of the presentation was largely a review for me as Service Bus is an area that I try to stay up to speed on.  The second half of the presentation introduced some new tooling and features as part of the June 2012 release.  Selfishly, I don’t want to go into too many details here as I would like to actually play with some of these features and then provide a more complete blog post(s) in order to provide these subjects with some additional context.   Stay tuned!


Microsoft TechEd North America 2012-Day 3


 

An Overview of Managing Applications, Services, and Virtual Machines in Windows Azure - Karandeep Anand

In this session Karandeep walked us through the new portal.  The new portal does not include Service Bus endpoints like queues or topics, and also does not include caching… yet.  I am told that Microsoft is hoping to have this functionality in the portal by the end of the year.  However, they have enabled single sign on, so you should be able to toggle back and forth between the old and new portals quite easily.

Within the portal we can get the state of:

  • Virtual Machine
  • Websites
  • Cloud Services (Web roles/Worker roles etc)
  • SQL Databases
  • Storage
  • Networks

Below I have taken a screenshot of the Azure portal.  I only have one SQL Database in the production portal but you can get a sense of the new look and feel.


You also have the ability to add new assets by clicking on the “+ New” link.


You can then specify the type of service that you would like to provision.  In the case of Virtual Machines, you need to sign up for the preview before you can actually provision a VM.


 

Scripting Management Support

Using the portal is not the only way to manipulate services in Azure.  There is first class support for scripting on Windows, Mac and Linux.  In the case of Windows, admins will find comfort knowing that there is first class support for PowerShell.  These PowerShell cmdlets take advantage of the same underlying REST APIs that the portal is using.

 

High Availability and Service Level Agreements

Having a third party, such as Microsoft, host IT services for your organization may create some concerns within your organization.  What if your services go down?  What “skin” does Microsoft have in the game?  To put it bluntly, they have some skin in the game.  Perhaps not as much as some would like, but Microsoft will be reimbursing organizations for their usage should they fail to live up to their commitments.

Here is a very loose breakdown of Microsoft’s SLA policy:

  • 99.95% uptime – monthly SLA
  • 4.38 hours of downtime per year for multiple role instances
  • 99.9% for single role instances
    • 8.76 hours per year
  • What’s included?
    • Compute Hardware failure (disk, CPU, memory)
  • Datacenter failures – network failure, power failure
  • Hardware upgrades, software maintenance – HOST OS updates
    • Planned downtime – 6 day notice, 6 hour window, 25 minute downtime

What is not included?

  • VM container crashes, Guest OS failures

Monitoring and Auto-Scaling applications

Now this was cool!  A company called AppDynamics demonstrated their monitoring solution for Azure.  Some of the features included:

  • Application performance management dashboard.  This included a graphical representation of your distributed solution and provided the latency that exists between each component.
  • You also had the ability to interrogate the stack level trace to get very granular
  • The tool also supported the ability to auto scale your application based upon different criteria sets including:
    • CPU
    • Message throughput
    • errors
    • specific business hours
    • critical conditions

Since Azure supports a “pay as you go” model, I found this tool to be extremely intriguing.  Not only did it look nice, but it provides functionality that can allow you to reduce costs when your app is not very busy, and also auto scale to ensure a good user experience when your site is busy.  To read more about this company and their product for Azure, please read the following press release.

 

Building HTTP Services with ASP.Net Web API – Daniel Roth

The other session that I wanted to talk about was Building HTTP Services with ASP.NET Web API.  For the past couple of months I have been playing with MVC3, jQuery and AJAX, so this session was rather timely.

What is Web API?

  • An Http Service
  • Designed for broad reach
    • browsers
    • phones
    • devices
    • set top boxes
  • Uses HTTP as an application protocol, not a transport protocol.  What this really means is that it takes advantage of the existing verbs: GET, POST, PUT and DELETE

Why build Web APIs?

  • Reach More Clients
  • Scale with Cloud
  • Embrace HTTP – simplify things
    • use existing verbs
  • Web API Requirements
    • Need a first class HTTP programming model
    • Easily map resources to URIs and implement the uniform interface
    • Rich support for formats and HTTP content negotiation
    • Separate out cross-cutting concerns
    • Lightweight
  • ASP.NET Web API is the end result (a minimal sketch follows below)
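
As a minimal sketch, a Web API controller is just a class.  With the default api/{controller}/{id} route, GET /api/products and GET /api/products/5 map onto these methods (the Product type and data are hypothetical):

using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Web.Http;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductsController : ApiController
{
    private static readonly List<Product> Products = new List<Product>
    {
        new Product { Id = 1, Name = "Widget" },
        new Product { Id = 2, Name = "Gadget" }
    };

    // GET api/products
    public IEnumerable<Product> GetAll()
    {
        return Products;
    }

    // GET api/products/5 - content negotiation returns XML or JSON to suit the caller
    public Product GetById(int id)
    {
        Product product = Products.FirstOrDefault(p => p.Id == id);
        if (product == null)
            throw new HttpResponseException(HttpStatusCode.NotFound);
        return product;
    }
}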

WebAPI description

If you like self-documenting APIs, then Web API has some built-in features to support this type of functionality.

  • Use the IApiExplorer Service to expose “contract”
  • Provides runtime description of WEB API
  • Renders content in a useful way
  • Shows request and response formats
    • XML, JSON, url-encoded

Hosting

  • Many options – self-host (console), IIS, Azure roles, other web servers
  • MSDN code gallery and NuGet Code packages are available

Other

  • ASP.NET Web API is available as part of MVC4
  • Is part of the recent open source movement that Microsoft has been involved in
  • Product team accepts 3rd party contributions
  • Unprecedented transparency
    • When Microsoft devs check in code, you have access to the code through a Git repository
  • ASP.NET MVC4 and Web API are included in the Visual Studio 2012 RC
  • WebAPI is now a Visual Studio project template
    • Can also create a unit test project
  • New MVC-like route mapping for Web API
    • api/{controller}/{id}
  • JSON, XML and form-url-encoded supported out of the box for HTTP Request
    • JSON and XML natively supported for HTTP Response
  • Validation is run on the data from every request
    • Check ModelState.IsValid to see if you have a valid request
  • Support for OData queries (a combined sketch follows below)
    • return IQueryable<T> from Get()
    • decorate the method with the [Queryable] attribute
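
Those last few bullets combined into one sketch: the route registration (typically in Application_Start) plus an OData-queryable action, reusing the hypothetical Product type from the earlier sketch:

using System.Linq;
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register()
    {
        // The MVC-like default route: api/{controller}/{id}
        GlobalConfiguration.Configuration.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}

public class CatalogController : ApiController
{
    // /api/catalog?$top=1&$orderby=Name is applied to the IQueryable for you
    [Queryable]
    public IQueryable<Product> Get()
    {
        return new[]
        {
            new Product { Id = 1, Name = "Widget" },
            new Product { Id = 2, Name = "Gadget" }
        }.AsQueryable();
    }
}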

Conclusion

I realize that the release of this technology has been highly contested.  There are people invested in the WCF stack who are now in a tough spot, migrating away from that technology to Web API.  For me, as someone new to this space, I really liked how you organize your project and have clean separation from controller to controller.  You can quickly expose services without the need for heavy WSDL type contracts.

I also like how most of Daniel’s presentation was run from Fiddler.  Like he mentioned several times, Web API at its root is really just HTTP.  So what better tool to craft requests than Fiddler?

In closing, I do a lot of System Integration, primarily with BizTalk, and must admit I like contract-based development where you are defining a firm contract upfront.  I have never been a big fan of loosely typed, lightweight services as things can quickly go to hell when doing this type of stuff for EAI.  However, I have woken up and seen the light.  I really do feel there are good use cases for this type of technology for lightweight application-based services.  I don’t necessarily think that this technology is a great fit for EAI, but for applications that may be surfaced using a variety of clients (mobile, web) I think this is a great way to expose back-end services to front-end clients.

 

Stay tuned for Day 4 as I expect to have some encouraging BizTalk news to report!

Application Integration Futures – The Road Map and what's next on Windows Azure


This presentation was hosted by Bala Sriram and Rajesh Ramamirtham

 

Blog Update: I have added a conclusion where I have provided a summary and some of my thoughts on this release.  Also, this session has now been posted so feel free to take a look at it here.  You can also add any comments that you have regarding this session at the bottom of this post.

Key takeaway from Bala: We are innovating in BizTalk!

General Update

  • BizTalk Server “R2” release will be available around 6 months after Windows 8
  • CTP expected this summer
  • Commitment to releasing server for years to come. Publicly indicating there will be at least another release beyond “R2”
  • 12k+ BizTalk customers
  • 81% of Fortune Global 100 use BizTalk
  • 79% of customers are using BizTalk 2010
  • CU delivered every quarter with product enhancements
  • Best NSAT in the industry
  • 6 of 8 largest US Pharmaceutical Companies use BizTalk
  • Continue to bet on BizTalk – We will take your investments forward!
  • Enabling new Azure based BizTalk scenarios for EAI & EDI
    • Bringing together BizTalk on-premises and in Azure

What are customers telling us?

    • Keep me current with platform, standards and LOB changes
    • Reduce time and cost of developing of Integration solutions
    • Let me focus on business challenges, not technology infrastructure
    • Cloud advantages
      • Cost-effective, scalable infrastructure for easy deployment
      • Some scenarios like b2b are amenable to cloud
    • Cloud Challenges
      • Data privacy, isolation, control over integration
      • LOB assets will continue to be on-premise
    • Phased cloud adoption on my terms
      • One size does not fit all

How will BizTalk meet these requirements?

  • Upgrade to latest MS platform
  • Improved reach for B2B customers
  • Better performance and manageability
  • BizTalk on Azure IaaS
    • Eliminate HW procurement lead times
    • Reduce time and cost to setup and maintain BizTalk environments
  • BizTalk on Azure PaaS for EAI and EDI
    • Reduce partner onboarding and management cost
    • Leverage existing BizTalk artifacts
    • Rapid configuration-driven development for common integration patterns
  • All of these working together seamlessly as one BizTalk
    • Trying to work under “one umbrella” but no naming can be implied at this time

BizTalk Server On-Premise Update

    • Platform Update
      • Support for:
        • VS 2012
        • Windows 8 Server
        • SQL Server 2012
        • Office 15
        • System Center 2012
    • B2B enhancements:
      • EDI
      • HL7 2.5.1, 2.6
      • SWIFT 2012 Message Pack
    • Better Performance
      • In Order Delivery process
        • Serialization created delays
      • Improved dynamic send ports and ESB via host handler association of Send ports
        • Can configure a dynamic send port host handler in Admin Console
      • MLLP adapter performance
      • HIS DB2 client transaction load balancing, client bulk insert (15 times faster)
    • Better manageability
      • Visualize BizTalk artifact dependencies in BizTalk admin console
      • ESB toolkit as core part of BizTalk setup and product
      • HIS Administration using Config files with application metadata stored in XML
    • Improved connectivity
      • Consume REST Services directly in BizTalk
        • WebHttpBinding will be used when calling REST Services
        • ACS support
      • Simplified SharePoint integration experience
        • No more adapter web service installs on SharePoint
      • Improvements to existing adapters (HIS, SMTP)
        • improved macros
      • Easy connectivity to Service Bus Relay, Queues and Topics
      • CICS http client connectivity to Windows

BizTalk running in Azure (IaaS)

  • Use case :

    • First step in the cloud adoption
      • Eliminate hardware procurement lead times
      • Reduce time and cost to setup and maintain BizTalk environments
      • Move applications from on premise and back
    • Create a virtual network in Azure and enable connectivity to on-premise network
      • User logs into Azure Portal
      • User creates a new VM and selects BizTalk stock image
      • User specifies the BizTalk environment topology and adds the VMs to an existing virtual network
      • New VMs are provisioned for user in Azure IaaS
      • User logs into the provisioned VM which has BizTalk installed and configured and starts using it.
  • Targeting same Windows 8 timeframe
  • Microsoft will provide guidance on performance
  • MSDTC support in Azure?
    • It is supported now in IaaS and was brought in to support BizTalk
  • All features that work on premise will work in IaaS

Goals

  • Seamlessly connect with Azure artifacts
  • Enable hybrid applications that span Azure and on-premises
  • Expose LOB services both on Premise and to the cloud

Conclusion

Wow…that was a lot of content in a short 1 hour 15 minute session.  There was actually more information released related to EDI support in the EDI Services (PaaS) but I just couldn’t keep up between writing this blog and tweeting with the European BizTalk community.

What I liked:

  • REST support, even if it is only Send
  • Cleaner integration with SharePoint.  A similar statement was made with BizTalk 2010, but after talking with product team members following the presentation, I know that this is not lip service.  The Adapter Web Service is gone.  No more installs on SharePoint servers.  Also, no more consuming SharePoint’s legacy “Lists.asmx” web services.  Yay!
  • Ordered Delivery performance.  It will be nice to have some improved performance while maintaining sequential integrity.
  • First class ACS support in selected “cloud enabled” Adapters
  • BrokeredMessage property/BizTalk Context property support
  • BizTalk IaaS – should open new capabilities
  • I can see the symmetry between on-premises and PaaS starting to materialize

What I would love to see:

  • Exposing REST end points
  • Single Mapper/Transformation experience between On-Premises and PaaS offering
  • Support for serializers other than XML (JSON, C#) – stay tuned?
  • Service Bus Connect – Receiving requests from LOB systems (SAP IDOCS)

 

Overall the tone was extremely encouraging.  Personally, I haven’t seen this much innovation come from the BizTalk team since BizTalk 2006 R2, when support for the WCF/WCF LOB adapters was introduced.  Yep…I said it.  The next release of BizTalk is no longer “just a platform update”.  In my opinion, this is a full release and should be named accordingly.  For those that think BizTalk is dead – better think again.  The operation was successful and the patient is still alive.

Building Integration Solutions Using Microsoft BizTalk On-Premises and on Windows Azure - Javed Sikander and Rajesh Ramamirtham


Update:  This session has now been posted to Channel 9 and you can view the video here.  Feel free to post any comments at the bottom of this post.

 

This was a follow up session to the Application Integration Futures – The Road Map and what's next on Windows Azure  session that was discussed here.  The primary focus of this session was to demonstrate some of the new capabilities of BizTalk On-Premises, BizTalk IaaS and BizTalk PaaS. 

During the presentation there were many questions as to what differences would exist between the On-Premises version and the IaaS version.  After many questions about particular features (BAM, ESB Portal etc.) Bala stepped in and declared that all features that exist in the On-Premises version will exist in the IaaS version.  After a further discussion following the session, it looks like there is a little more work to do in the area of clustered host instances, but otherwise we can expect symmetry between these two versions.

Since BizTalk Next (aka “R2”) will be released as part of the latest Microsoft platform offering (Windows Server, SQL Server, Visual Studio), all BizTalk projects will target the .Net 4.5 platform.

Since the primary purpose of this session was to demonstrate some of these new features, let’s get into some of the scenarios/demos that were discussed.

BizTalk Consuming REST services

In the first example, the team demonstrated BizTalk consuming a REST feed from the Azure Data Market.  Specifically, the feed was related to flight delays.  BizTalk connected using the new WCF-WebHttpBinding and performed a GET operation against this particular feed.  Since the foundation for authentication when communicating with Azure is the Access Control Service (ACS), Rajesh demonstrated the out of box ACS authentication configuration.

BizTalk consuming SalesForce.com over REST API

Once again BizTalk was configured to consume a REST service.  In this case it was a SalesForce customer feed.  Within the Send Port, the “SOAP Action Header” was populated and once again included the GET operation.  A custom transport behavior was used to provide the appropriate credentials. When executed, a list of customers was returned from SalesForce.

Next, the URI in the SOAP Action header was modified and a hard-coded id was provided for a particular customer.  In this case only this particular customer was returned.  Both Bill Chestnut and I were thinking “great, but how do we inject a dynamic customer id into this GET request”?  Once again the BizTalk team had an answer for this and it came in the form of a new Variable Mapping button.  When clicked, an interface appears that allows us to specify the name of a context/promoted property.  Bottom line is that we can drive this dynamic value from the message payload or context.

Finally, the last SalesForce demo included a POST, where they were able to demonstrate how to update a customer record in SalesForce.com. 

 

BizTalk PaaS: Azure EAI Services

The team then switched gears and started talking about BizTalk PaaS: Azure EAI Services.  I have no idea whether this will be the official name; it is what the title of their slide included so I am using it here.  I do like it, and I like that BizTalk is still associated with this type of functionality.  I must caution that the product team did indicate not to read too much into naming/branding at this point.

Some of the functionality (some new, some old) that we can expect in the PaaS solution includes:

  • Sequence of activities to resolve impedance mismatches
  • Flat file disassembly
  • Message validation
  • Transforms
  • Content based routing
    • XPath, FTP properties, Lookup (against SQL Azure), Http properties, SOAP
  • Hosting custom code
  • Scripting functoid to host .Net Code
  • XSLT support
  • New Service Bus Connect Wizard
  • BizTalk connectivity to Azure Artifacts (Service Bus Queues, Topics, XML bridges)

EDI Portal

  • Metro UI for managing trading partners
  • Manage and monitor AS2, X12 agreements
  • View resources like Transforms, Schemas, Certificates

EDI Bridge

  • Archiving
  • Batching
  • Tracking

Other

  • IaaS will be a public TAP
  • Other BizTalk releases(On-Premises/PaaS) will be “regular” TAP
  • On a lighter side, I did ask if we can expect a Metro version of the BizTalk Admin Console.  Don’t expect it any time soon.  Basically any new UIs that need to be created will follow the Metro styling, but other than that don’t expect many updates to previous tools.

Conclusion

This was a great session that included many demos and really proved that what the Product team spoke to in the previous session wasn’t just lip service.  Having been at the MVP Summit, I must say I was pleasantly surprised at the amount of functionality that they have been working on.  Once again, I love the direction that they are heading in.  It is an updated feature set that should please customers no matter what their ideal deployment model is (On-Premises, IaaS, or PaaS).  You can also tell that they are serious about symmetry; it may take a while for PaaS to be more closely aligned to On-Premises/IaaS, but I think they are headed in the right direction.

Microsoft TechEd North America 2012-Day 4


So this post is a little delayed due to all of the excitement around the BizTalk sessions.  However, the sessions were so good that I still wanted to publish the post.

Azure Service Bus Solution Patterns – Clemens Vasters and Abhishek Lal

Another session by Clemens and Abhishek.  This time around it was a very practical session based upon some Customer Use Cases and how to implement some popular integration design patterns based upon the “Integration Bible” - Enterprise Integration Patterns.  To view the actual session on Channel9, click here.

Some of the Use Cases included:

  • Web Services For Retailers
    • Company from Italy
    • Provide SAAS solution for Retail Stores
    • Seed local retail outlets with Catalogue and Pricing information
    • Push out to retail stores
      • Use Topics to distribute information to each retail store
  • SaaS with Dynamic Compute Workload
    • High Performance Computing (HPC) scenario
    • Command and Control messages sent in from Service Bus
    • ISV specialized dynamic compute capacity provider
  • Consumer Web Site
    • Web site that searches for data about people – credit check, criminal check etc.
    • Their challenge was back end data co-ordination
    • Different profiles for users who have different access to back-end services
    • Queues for decoupling the web layer from middle-tier services

 

Scaling things out

Next Clemens walked us through a scenario that Microsoft has been working on with a particular customer.  The solution was related to remotely controlling air conditioners.  The idea is that a consumer would have the ability to manually control the unit, but also power providers could *potentially* control it to prevent rolling brown-outs from occurring.  Instead of instituting widespread rolling brown-outs, each customer could alter their consumption.  Collectively these savings add up and prevent demand from exceeding supply.  I am a little skeptical about a power company (I work for one) controlling someone’s air conditioner, but in theory it makes a lot of sense.

The requirements for this solution includes:

  • Pair devices, such as air conditioner, to local Wi-Fi connection
  • Users need the ability control the device
    • Control requests could be made from back yard or across the world
    • Service Bus makes these control requests possible from anywhere that has an internet connection.
  • Devices will then send consumption data to Azure where the data can be viewed on a mobile device. This data will make its way to Azure via Service Bus.  The premise behind this is if customers are more aware of their consumption patterns, then they may try to alter them.  This is something that my organization has also been investigating.

So a question remains: these types of consumer devices will not have the .Net Service Bus bindings installed, so how will they actually communicate?  The answer is really HTTP.  You can send HTTP requests to the Service Bus, and in this case Clemens introduced a concept that he likes to call “N-HTTP”.  It is similar to the “NoSQL” movement, but in this case it relates to HTTP.  An HTTP request in many cases includes HTTP Headers but also an entity body.  The entity body could include JSON content, XML content etc.  The challenge with entity bodies is that you need a parser to package the information up in requests or un-package it when receiving responses.  This would further complicate things as these parsers would need to be loaded onto these consumer devices.  What’s interesting about HTTP Headers is that they are well understood across devices, systems and technology stacks, and do not require parsers.  So if you can get away with sending key/value pairs when sending or receiving messages, then this approach should work for you.
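
To make that concrete, here is my own rough sketch of what a header-only send might look like against the Service Bus REST endpoint, which surfaces custom message properties as plain HTTP headers; the namespace, queue name, token and property names are all placeholders of mine, not anything shown in the session:

using System;
using System.Net;

class NHttpSender
{
    static void Main()
    {
        // Placeholders -- substitute your own namespace, queue and ACS/WRAP token
        const string queueUri = "https://mynamespace.servicebus.windows.net/devicecommands/messages";
        string wrapToken = "WRAP access_token=\"...\"";   // acquired from ACS beforehand

        var request = (HttpWebRequest)WebRequest.Create(queueUri);
        request.Method = "POST";
        request.ContentLength = 0;  // the whole "payload" travels as key/value headers
        request.Headers.Add("Authorization", wrapToken);

        // No entity body, so the device needs no JSON/XML parser
        request.Headers.Add("DeviceId", "ac-1234");
        request.Headers.Add("SetPoint", "22");

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Sent: {0}", response.StatusCode);
        }
    }
}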

Receiving messages from Service Bus generally includes using ‘long polling’ when waiting for messages.  Using long polling sockets isn’t a great use of power for devices that do not have permanent power sources (devices that rely on batteries).  With this in mind, Microsoft has been working with other industry leaders on AMQP (the Advanced Message Queuing Protocol).  AMQP is a popular queuing technology that is used in financial brokerage settings.  Another benefit of using AMQP is that it has a quieter socket profile, which results in lower battery consumption.  So this is an area that Microsoft is investing in that will have widespread benefits….Cool Stuff!!!

 

Message Channel Patterns

Abhishek was back on point and walked us through some popular messaging patterns including:

  • Pub-Sub
    • Accomplished via Topics
  • Content Based Router
    • Using Topics based upon a Subscription Rule
  • Recipient List
    • Sender wants to send the message to a list of recipients
    • Common use-cases
      • Order processing systems – route to specific vendors/departments
      • “Address LIKE ‘%First%’
  • Message Routing
    • Session re-sequencer – receiving messages out of order and then using the Defer method to postpone processing until you receive the message that is next “in order” (a rough sketch follows this list)
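
A minimal sketch of that re-sequencer idea using the BrokeredMessage API; the queue client setup is omitted, the "AppSequence" property is my own placeholder for however the sender numbers its messages, and error handling is left out:

using System;
using System.Collections.Generic;
using Microsoft.ServiceBus.Messaging;

class ReSequencer
{
    static void Run(QueueClient queueClient)
    {
        long expected = 1;
        // application sequence number -> Service Bus sequence number of the deferred message
        var deferred = new Dictionary<long, long>();

        BrokeredMessage msg;
        while ((msg = queueClient.Receive()) != null)
        {
            long appSeq = (long)msg.Properties["AppSequence"]; // placeholder property

            if (appSeq != expected)
            {
                // Out of order: park it in the queue, retrievable later by sequence number
                deferred[appSeq] = msg.SequenceNumber;
                msg.Defer();
                continue;
            }

            Process(msg);
            msg.Complete();
            expected++;

            // Drain any previously deferred messages that are now in order
            long sbSeq;
            while (deferred.TryGetValue(expected, out sbSeq))
            {
                BrokeredMessage next = queueClient.Receive(sbSeq);
                Process(next);
                next.Complete();
                deferred.Remove(expected);
                expected++;
            }
        }
    }

    static void Process(BrokeredMessage msg)
    {
        Console.WriteLine("Processing message {0}", msg.MessageId);
    }
}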

I must admit, when I learn more about the Service Bus I do get a little giddy.  I just see it as such an enabling technology.  It facilitates building applications that just wouldn’t be possible, or would be cost prohibitive, in years gone by.  Whether it is submitting SAP timesheets remotely or reporting customer power outages, it is an amazing technology and the opportunities are endless when it comes to bridging data center boundaries.

 

Mobile + Cloud: Building Mobile Applications with Windows Azure – Wade Wegner

Wade Wegner, a former Microsoft Azure Evangelist, put together a pretty interesting session related to Windows Phone and Azure.  To watch this session on Channel 9, click here.  In the past I have followed some of the work he did with the mobile toolkits for the different mobile platforms, but I just haven’t had the time to take a closer look.

This session focused primarily on Windows Phone 7 and how it interacts with some of the Azure services (Storage, SQL Azure, Tables, ACS).  Personally, I think these technologies complement each other very well.  Especially in the area of bridging mobile devices with on-premise LOB solutions and leveraging the Access Control Service (ACS) for authentication.

Three reasons for Device + Cloud

  • Allows for new application scenarios
  • The cloud levels the playing field
  • The cloud provides a way to reach across device platforms and a larger pool of resources from which to pull

Why Azure?

  • PaaS you build it, Windows Azure runs it
  • Automatic O/S patching
  • Elasticity and Scale
  • Utility Billing
  • Higher-level services
  • ACS, Caching, CDN (cache static content), Traffic Manager (route traffic across Azure datacenters based on locale)

Wade then demonstrated a scenario that really lends itself well to this technology: a mobile application that takes advantage of social identity providers (Windows Live, Google, Yahoo) for authentication via the Access Control Service.  Wade demonstrated that this isn’t as complicated as it sounds.  With the help of a NuGet package and adding an STS reference, we can get this working in a matter of minutes.  Wade then added some additional functionality to consume an ASP.Net Web API.  Most presenters would have left their demo there, giving people the information to build the services but leaving out some “real world” gaps around security.  Wade took his demo one step further and showed how we can use the ACS service to authorize user requests as well.  Before the ASP.Net Web API method is called, we can intercept the request and validate that the token included as part of the HTTP Request is a valid ACS token.  Provided the token is valid, the appropriate data will be returned.

Wade then wrapped up his session demonstrating how we can use the Azure Push Notification service to serve up “toast notifications”.  Another set of useful information that I hope to play with soon. 

If you are into mobile apps, you definitely owe it to yourself to watch this session so you can learn about all of the Azure services that you and your customers can benefit from.

Exposing common service(s) to SAP and WCF clients


I have a scenario I am dealing with at work that involves exposing some common data to two different systems: SAP and a Custom ASP.Net Web App.  Both of these applications will request BizTalk to fetch some data from a variety of database views, aggregate it and package it up nicely for these calling systems.  Both Systems will be requesting this information on demand – i.e. Synchronously.  SAP will be calling an RFC hosted in BizTalk, via the SAP Adapter, using the method that I identified in a previous post.  The Custom Web Application will be consuming a WCF Service hosted in IIS.

Conceptually, my solution looks like this:

image

Whenever you have multiple systems trying to consume the same data, you generally try to utilize a Canonical schema approach.  Canonical schemas allow you to take the different external data formats and transform them into a common internal format before handing them off for processing, like in an Orchestration.

image

You then perform all of your processing using this internal format to reduce the amount of code/configuration that you require to solve the problem.  Additionally, when you need to make a change, you do so in one place as opposed to two(or multiple) locations.

In order to keep things simple for this POC, I decided to reuse my RFC Add solution where you can have a client pass two numbers to BizTalk, BizTalk will then sum them and provide the answer back to the calling application.

image

For the Web Client, I will simply expose custom “Web” Schemas as WCF Services using the BizTalk wizard provided within Visual Studio.  Note that I did not want to expose my SAP schemas to my Web Application.  I could have done that but it is not a good practice, as any changes to the SAP schemas would impose a change on my Web Application whether it was required or not.  Also, SAP schemas tend to be complex and we don’t want to unnecessarily propagate that complexity onto other applications if we don’t have to.

Initially I thought my solution would be pretty straight forward:

  • Generate my SAP Schemas
  • Create my Schemas that will be used for the Web Application and expose them via Wizard
  • Create my Canonical Schemas
  • Create related maps

I then created my logical port and set the Request and Response message types to my Canonical schemas. I deployed my application and configured my Physical Port within the BizTalk Admin Console.  I decided that I was going to re-use the port that was created as part of the BizTalk WCF Publishing Wizard.  I would simply add a Receive Location for SAP and set the appropriate inbound and outbound port mappings. 

Using inbound port mapping is very simple: I can specify multiple maps and BizTalk will detect which Map to use based upon the message type that is being passed.  So if we receive a request from SAP, BizTalk will detect this and use the SAP_to_Canonical.btm map.

 

image

It then hit me…how will BizTalk determine which Map to use on the Outbound (Response) message?  The message being passed to the port will always be the same as it will be in my canonical format.  I soon found out.  As you can see in the screenshot below, my SAP response was sent down to my Web Client (which in this case was the WCF Test tool).  Not the desired result that I was looking for.

image

While chatting with a colleague, he asked “why don’t you try a Direct Bound port?”  I have used Direct Bound ports in the past, but only in asynchronous scenarios.

So to fix this, I changed:

  • My logical Request-Response port to be a Direct bound port and to be Self Correlating.

image

  • Created an additional Receive Port.  I now have a Receive Port for my Web App and for SAP.

image

  • Made the appropriate Inbound and Outbound Port Mappings.  Now each port only has 1 Inbound and 1 Outbound port mapping.

image

 

  • My orchestration will no longer have a Physical Port to bind to since it will be Direct Bound to the MessageBox

image

  • Now when I execute my test from the WCF Test Client, I get the correct result in the WebAddResponse message type that I am expecting

image

  • I am also getting the correct response from SAP
image

Conclusion

The magic in this solution is really the Request-Response direct bound port.  The idea is that our Orchestration will place a subscription on our Canonical Request message.  It doesn’t really matter how that message gets into the MessageBox as long as it is there.  In this case we have exposed two endpoints, one for SAP and one for our Web App.  In both scenarios they will take their specific Request message and transform it into our Canonical message, and therefore our Orchestration will pick it up.

Request-Response ports always use a form of Correlation so that they can provide the correct Response back to the calling client.  We can take advantage of this mechanism to ensure we get the correct Canonical Response message, which in turn can use Outbound Port mapping to send our response in the correct format to the calling application.

Part 1: BizTalk + SignalR

$
0
0

For those unfamiliar with BizTalk, it is Microsoft’s premiere Enterprise Application Integration (EAI) platform.  It is used to integrate disparate systems over a variety of protocols using a durable pub-sub mechanism.

SignalR does have some similarities to BizTalk in that it is a messaging system that also supports the notion of pub-sub.  However, SignalR’s sweet spot is really lightweight messaging across Web clients.  SignalR itself is a scalable, asynchronous .Net library authored by David Fowler and Damian Edwards.  If you are new to SignalR, I recommend checking out this post by Scott Hanselman who describes many of the underlying technical details that I will not be going into in this post.

Why is SignalR important?

One of the true benefits of SignalR is that it is asynchronous by nature.  I don’t profess to be an expert web developer.  I have done some web development in an earlier life, prior to my BizTalk days, but I know enough to understand that locking up a user’s browser during a request-response interaction can be a really bad thing.  Yes, technologies like AJAX and JQuery have been introduced to provide a more asynchronous experience, and they both have their strengths and weaknesses, but overall they require many steps to solve this Request-Response locking problem.  The question remains: what happens when you have events occurring in other systems that you want raised within the system that you are currently interacting with?  This is where I feel the true “magic” of SignalR comes into play.

Scenario

I work in the Electricity/Power industry and we are implementing an Outage Management System (OMS).  OMS systems are used to calculate or anticipate the particular device(s) that are the underlying problem that is causing a Power Outage.  OMS systems may have many different types of input including Customer Calls, IVR messages or even SCADA events.  In this case we are only going to focus on Customer Calls.

This OMS system is a commercial off the shelf (COTS) product that we have purchased from a vendor.  This product has defined, XML based, asynchronous interfaces that require the use of  WebSphere MQ queues.  Using BizTalk to integrate with the OMS system makes a lot of sense and plays well to BizTalk’s strengths that include:

  • Support for MQ Series
  • Durable Messaging
  • Tracking
  • Configuration
  • Correlation (Async messaging)
  • XML Schemas
  • etc..

But the question remains: we need to capture information that is coming from our Customer’s calls in our Call Centre.  One option that we are currently exploring is a lightweight Web-based application that will allow our Call Centre to quickly capture a customer’s outage information and then pass this information to BizTalk, and have BizTalk deal with calling the OMS’s semi-complex interfaces.

Much earlier in my career I may have been tempted to do the following:

  • Expose a WCF/Web Service that a Web Application can consume
  • Accept the request from the Web App and then proceed to call the asynchronous interfaces that exist in the OMS system.
  • In the meantime, the Web Application that called the BizTalk Service is being blocked as BizTalk is still processing messages Asynchronously.
  • Once BizTalk is done interacting with the OMS system, BizTalk will provide a response back to the Calling Web Application.

The reality is that this is a bad pattern.  You don’t want to lock up your users in a Web Application if you don’t have to, especially when you have asynchronous messaging happening in the backend.

image

An alternative approach, that I like much better, is outlined below:

  • Expose a WCF/Web Service that a Web Application can consume.
  • Once BizTalk has received the Web Request from the Web Application, simply provide an acknowledgement that the message has been received and will be processed.
  • At this point the Web Browser has been posted back.  If our Web Application is built around technologies like JQuery and/or AJAX our users can continue to perform some work.
  • In the meantime, as BizTalk is calling the OMS-related interfaces, BizTalk can provide status updates back to the Web Application using SignalR.  There is actual information that the OMS system will pass back that our end users are interested in.  More specifically, it will include information as to when the Customer can expect their power to be restored (Estimated Time of Restore).  If you have ever experienced a power outage, I am sure you would like to know if it is going to last 30 minutes or 10 hours.

The benefits to this approach include:

  • User’s browser is not locked up
  • Users are continuing to be updated as to the status of their request
  • No need to continue to refresh your page (Just say NO to F5) in order to get a status update.

image

Conclusion

I am sure at the beginning of this post you were thinking: what could BizTalk and SignalR possibly have in common?  I hope that I have provided a good example of how these two technologies complement each other.

In Part 2 of this series, I will actually implement this pattern that I have shown above.  I have split this information into two parts due to the total length of the content.  Stay tuned!

Part 2: BizTalk + SignalR

$
0
0

In my previous post, we discussed some of the reasons why BizTalk and SignalR may complement each other in some situations.  I will now walk through the implementation of this OMS scenario.

I am going to create a Call Taker Web application that will communicate with a BizTalk-exposed WCF service.  Once BizTalk receives the request message, we will send a response acknowledgement message back to the Call Taker Web application.  BizTalk will then communicate with the OMS system.  “In real life” this will involve WebSphere MQ, but for the purpose of this blog post I am simply going to use the FILE Adapter and a folder that will act as my queue.  Once we have finished communicating with OMS, we want to send a status update message to the Call Taker application using SignalR.  In this message we will include the Estimated Time of Restore (ETR) for the customer who has called in.

image

 

The Bits

Other than a base BizTalk install, we are going to need the SignalR bits.  As in most cases, NuGet is your friend.  However, as you probably know, BizTalk requires any “helper” assemblies to be in the GAC, so we need to sign the SignalR.Client assembly with a Strong Name key.  To get around this I suggest you download the source from here.  You only need to do this for the SignalR.Client assembly; a rough outline of the steps follows.
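
For reference, the signing and GAC steps boil down to something like this from a Visual Studio command prompt (the key file name is my own placeholder):

rem generate a strong name key (or reuse the one from your BizTalk project)
sn -k SignalR.snk

rem assign this key on the SignalR.Client project's Signing tab, rebuild, then:
gacutil /i SignalR.Client.dll
gacutil /i Newtonsoft.Json.dll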

The Solution

There are really 3 projects that make up this solution:

image

Let’s start with the BizTalk application since we are going to need to expose a WCF Service that the Web Application is going to consume.

In total we are going to need 4 schemas:

  • CallTakerRequest – This schema will be exposed to our Web Application as a WCF Service.  In this message we want to capture customer details.

image

  • CallTakerResponse – This will be our acknowledgement message that we will send back to the WCF client.  The purpose is to provide the Web Application with assurance that we have received the request message successfully and that we “promise” to process it.

image

  • CreateCallRequest – This message will be sent to our OMS system.  Also note the msgid field which has a promoted property.  Since we are going to use correlation to tie the CreateCallRequest and CreateCallResponse messages together, we will use this field to bind the messages.

image

  • CreateCallResponse – When our OMS system responds back to BizTalk, it will include the same msgid value that was included in the request.  This field will also be promoted.  The other two elements (ETR and OrderNumber) are distinguished so that we can pass them off to the SignalR Helper easily.

image

We will also need two maps:

  • CallTakerRequest_to_CallTakerResponse – The purpose of this map is to generate a response that we can send to the Web Client.  We will simply use a couple functoids to set a status of “True” and provide a timestamp.

image

  • CallTakerRequest_to_CreateCallRequest – This map will take our request message from our Web App and then transform it into an instance of our OMS Create Call message.  For the msgid, I am simply hardcoding a value here to make my testing easier.  In real life you need to ensure you have a unique value.

image

  • We now need an Orchestration to tie all of these artifacts together.  The Orchestration is pretty straightforward.  However, as I mentioned in the CreateCall schemas, we have promoted the msgid element.  The reason for this is that when we receive the message back from the OMS system, we want it to match up with the same Request instance that was sent to OMS.  To support this we need to create a CorrelationType and CorrelationSet.

image

The final Expression shape, identified by ‘Send SignalR Update’, is of particular interest to us since it will call a helper method that sends our update to our Web Application via the SignalR API.

image

This is a good segue into diving into the C# Class Library project called BizTalkRHelper.

BizTalkRHelper Project

Since we are going to start interfacing with SignalR within this project, we are going to need a few project references, which we can get from NuGet.  Please recall that we need a signed SignalR.Client assembly, so we will need to compile its source code and then sign it with a Strong Name key.  This can be the same key as the one that was used in the BizTalk project.  As I mentioned before, we need to GAC this assembly, hence the requirement for the Strong Name key.  We will also need to GAC the Newtonsoft.Json assembly, but this does not require any additional signing on our part.

Otherwise we can use the assemblies that are provided as part of the NuGet packages.

image

This project includes two classes:

  • Message – This class is used as our strongly typed message that we will send to our web app.

image

  • CallTakerNotification – Within this class we will establish a connection to our Hub, construct an instance of the message that we want to send our client, provide the name of what you can think of as a subscription, and then send the message.  Obviously, in a real world scenario, hardcoding this URI is not a good idea.  You may also recognize that this is the method that we are going to be calling from BizTalk, as we are providing the Estimated Time of Restore (ETR) and the OrderNumber that we received from our OMS system.  This is why we identified these elements in the CreateCallResponse message as being distinguished.  This also means that our BizTalk project will require a reference to this BizTalkRHelper project so that we can call this assembly from our Orchestration.  A rough sketch of what this class might look like follows the screenshot below.

image
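
As promised, here is a rough sketch of the helper, assuming the SignalR 0.5.x client API and that the Message class exposes the ETR and OrderNumber properties the JavaScript later in this post consumes; the hub URL and method name are placeholders inferred from the description, not copied from the screenshot:

using SignalR.Client.Hubs;

namespace BizTalkRHelper
{
    public class CallTakerNotification
    {
        // Called from the Orchestration's 'Send SignalR Update' Expression shape
        public static void SendUpdate(string etr, string orderNumber)
        {
            // Hardcoded for the POC -- not something you would do in real life
            var connection = new HubConnection("http://localhost/CallTakerWeb/");
            IHubProxy messenger = connection.CreateProxy("messenger");

            connection.Start().Wait();

            var message = new Message { ETR = etr, OrderNumber = orderNumber };

            // "CallTaker" is the group (think: subscription) that the browser joined
            messenger.Invoke("BroadCastMessage", message, "CallTaker").Wait();

            connection.Stop();
        }
    }
}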

CallTakerWeb Project

This project will be used to store our Web Application artifacts.  Once again, with this project we need to get the SignalR dependencies.  I suggest using NuGet and searching for SignalR.

image

Next, we need to add a couple of classes to our project.  These classes are really where the “heavy lifting” is performed.  I use the term “heavy” lightly considering how few lines of code we are actually writing versus the functionality that is being provided.  Note: I can’t take credit for these two classes as I have leveraged the following post: http://65.39.148.52/Articles/404662/SignalR-Group-Notifications.

  • Messenger – Provides helper methods that will allow us to:
    • Get All Messages
    • Broadcast a message
    • Get Clients

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

using System.Collections.Concurrent;
using SignalR;

namespace CallTakerWeb
{
    public class Messenger
    {
        private readonly static Lazy<Messenger> _instance = new Lazy<Messenger>(() => new Messenger());
        private readonly ConcurrentDictionary<string, BizTalkRHelper.Message> _messages =
            new ConcurrentDictionary<string, BizTalkRHelper.Message>();

        private Messenger()
        {
        }

        /// <summary>
        /// Gets the instance.
        /// </summary>
        public static Messenger Instance
        {
            get
            {
                return _instance.Value;
            }
        }


        /// <summary>
        /// Gets all messages.
        /// </summary>
        /// <returns></returns>
        public IEnumerable<BizTalkRHelper.Message> GetAllMessages()
        {
            return _messages.Values;
        }

        /// <summary>
        /// Broadcasts a message to the clients in the specified group.
        /// </summary>
        /// <param name="message">The message.</param>
        /// <param name="group">The target group.</param>
        public void BroadCastMessage(Object message, string group)
        {
            // Dynamically invokes the client-side 'add' callback for everyone in the group
            GetClients(group).add(message);
        }

        /// <summary>
        /// Gets the clients.
        /// </summary>
        /// <returns></returns>
        private static dynamic GetClients(string group)
        {
            var context = GlobalHost.ConnectionManager.GetHubContext<MessengerHub>();
            return context.Clients[group];
        }


    }
}

 

  • MessengerHub – Is used to:
    • Initialize an instance of our Hub
    • Add to a new group
    • Get All Messages
    • Broadcast a message to a group

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using SignalR.Hubs;
using BizTalkRHelper;

namespace CallTakerWeb
{
    [HubName("messenger")]
    public class MessengerHub : Hub
    {
        private readonly Messenger _messenger;

        public MessengerHub() : this(Messenger.Instance) { }

        /// <summary>
        /// Initializes a new instance of the <see cref="MessengerHub"/> class.
        /// </summary>
        /// <param name="messenger">The messenger.</param>
        public MessengerHub(Messenger messenger)
        {
            _messenger = messenger;

        }

        public void AddToGroup(string group)
        {
            this.Groups.Add(Context.ConnectionId, group);
        }

        /// <summary>
        /// Gets all messages.
        /// </summary>
        /// <returns></returns>
        public IEnumerable<BizTalkRHelper.Message> GetAllMessages()
        {
            return _messenger.GetAllMessages();
        }

        /// <summary>
        /// Broadcasts a message to the clients in the specified group.
        /// </summary>
        /// <param name="message">The message.</param>
        /// <param name="group">The target group.</param>
        public void BroadCastMessage(Object message, string group)
        {
            _messenger.BroadCastMessage(message, group);
        }
    }
}

With our SignalR plumbing out of the way, we need to make some changes to our Site.Master page.  Since I am using the default Web Application project, it uses a Site.Master template.  We need to include some script references to some libraries.  By placing them here we only need to include them once and can use them on any other page that utilizes the Site.Master template.

<script src="Scripts/jquery-1.6.4.min.js" type="text/javascript"></script>
<script src="Scripts/BizTalkRMessengerHub.js" type="text/javascript"></script>
<script src="Scripts/jquery.signalR-0.5.2.js" type="text/javascript"></script>
<script src="../signalr/hubs"></script>

You may not recognize the second reference (BizTalkRMessengerHub.js), nor should you, since it is custom.  I will further explore this file in a bit.

Next we want to modify the Default.aspx page.  We want to include some <div> tags so that we have placeholders for content that we will update via JQuery when we receive the message from BizTalk.

We also want to include a label called lblResults.  We will update this label based upon the acknowledgement that we receive back from BizTalk

<div class="callTakerDefault" id="callTaker" ></div>
<asp:Label ID="lblResults" runat="server" Text=""></asp:Label>
<div id="orderUpdate"> </div>
<div id="etr"> </div>
<div id="orderNumber"></div>

<br />
<h2>Please provide Customer details</h2>
<table>
    <tr>
        <td>Customer Name: <asp:TextBox ID="txtCustomer" runat="server"></asp:TextBox></td>   
    </tr>
    <tr>
        <td>Phone Number: <asp:TextBox ID="txtPhoneNumber" runat="server"></asp:TextBox> </td>
    </tr>
    <tr>
         <td>Customer Site ID: <asp:TextBox ID="txtCustomerSiteID" runat="server"></asp:TextBox></td>
    </tr>
    <tr>
        <td>Comments: <asp:TextBox ID="txtComments" runat="server"></asp:TextBox></td>
    </tr>
   </table>
   
  <asp:Button ID="Button1" runat="server" Text="Submit" onclick="Button1_Click" /><br />

 

The last piece of the puzzle is the BizTalkRMessengerHub.js file that I briefly mentioned. Within this file we will establish a connection to our hub, add ourselves to the CallTaker subscription and then get all related messages.

When we receive a message, we will use JQuery to update our div tags that we have embedded within our Default.aspx page.  We want to provide information like the Estimated Time of Restore and the Order Number that the OMS system provided.

$(function () {
    var messenger = $.connection.messenger; // generate the client-side hub proxy (initialized to the exposed Hub)


    function init() {
        messenger.addToGroup("CallTaker");
        return messenger.getAllMessages().done(function (message) {

        });
    }

    messenger.begin = function () {
        $("#callTaker").html('Call Taker Notification System is ready');

    };

    messenger.add = function (message) {
        //update divs
        $("#orderUpdate").html('Order has been updated');
        $("#etr").html('Estimated Time of restore is: ' + message.ETR);
        $("#orderNumber").html('Order Number: ' + message.OrderNumber);
      
        //Set custom backgrounds
        $("#orderUpdate").toggleClass("callTakerGreen");
        $("#etr").toggleClass("callTakerGreen");
        $("#orderNumber").toggleClass("callTakerGreen");

    };


    // Start the Connection
    $.connection.hub.start(function () {
        init().done(function () {
            messenger.begin();

        });
    });

 

});

 

Testing the Application

So once we have deployed our BizTalk application and configured our Send and Receive Ports we are ready to start testing. To do so we will:

  • Launch our Web Application.  The first thing that you may notice is that we have a <div> update indicating that our Notification System is ready.  What this means is that our browser has created a connection to our Hub and is now listening for messages.  This functionality was included in the JavaScript file that we just discussed.

image

  • Next we will populate the Customer form providing their details and then click the Submit button.

image

  • Once the button has been pressed we should receive an acknowledgement back from BizTalk and we will update the results label indicating that the Order has been received and that it is currently being processed.

image

  • You may recall that at this point we will start sending messages Asynchronously with the OMS system.  For the purpose of this blog post I am just using the FILE Adapter to communicate with the File System.  When I navigate to the folder that is specified in my Send Port, I see a newly created file with the following contents:

image

  • Ordinarily, the OMS system would send back an Acknowledgement message automatically but for this post, I am just going to mock one up and place it in the folder that my Receive Location is expecting.  You will notice that I am also using the same msgid to satisfy my Correlation Set criteria.

image

  • When BizTalk processes the CreateCallResponse, it will invoke our SignalR helper and a message will be sent to our Web Browser, which will subsequently be updated without any post backs or browser refreshes.  Below you will see 3 div tags being updated with this information that was passed from BizTalk.

image

 

Conclusion

At this point I hope that you are impressed with SignalR.  I find it pretty amazing that we have other systems like BizTalk sending messages to our Web Application asynchronously without the browser having to be posted back or refreshed.  I also think that this technology is a great way to bridge different synchronous/asynchronous messaging patterns.

I hope that I have provided a practical scenario that demonstrates how these two technologies can complement each other to provide a great user experience to end users.  We are seriously considering using this type of pattern in an upcoming project.  This was really my introduction to the technology and I do have some exploring to do, but so far I am very happy with the results.


Win A Free Copy of Packt's Microsoft BizTalk Server 2010 Certification Guidebook

The author team is pleased to announce that we have teamed up with the publisher, Packt Publishing, and are organizing a giveaway.  Three lucky winners stand a chance to win an e-copy of our book.


Overview of Microsoft BizTalk Server 2010 Certification Guide
• Includes a comprehensive set of test questions and answers that will prepare you for the actual exam.

• The layout and content of the book closely matches that of the skills measured by the exam, which makes it easy to focus your learning and maximize your study time in areas where you need improvement.

Read more about this book and download free Sample Chapter: http://www.packtpub.com/mcts-microsoft-biztalk-server-2010-certification-guide/book

Also, feel free to check out some of the community reviews of the book:
How to Enter?
All you need to do is email MctsBTSBook@hotmail.com and let us know in a couple sentences why you would like to get your BizTalk Certification.

Deadline:
The contest will close on Friday, September 7th, 2012.  Winners will be announced on this blog and will be contacted by email.

Packt MCTS BizTalk certification e-copy winners

This is a follow-up post to the Win an e-copy of the Packt MCTS BizTalk certification book post.  Thank you to all who entered.  I enjoyed reading why you were interested in pursuing certification.  The following people have won an e-copy of the book:
  • Johan Älverdal
  • Kevin Molloy
  • Donie Treadaway
I have forwarded your email addresses to the publisher and they will be in touch.

BizTalk 2010 R2 CTP: Azure Service Bus Integration–Part 1


Back in June 2012, I had the opportunity to attend TechEd North America.  At this event the BizTalk team gave us a glimpse into the next version of BizTalk and went over the Product Road map.  You can read more about this Roadmap session here.

One of the areas that Microsoft wanted to address was better/seamless integration with Azure, and more specifically with Service Bus Queues and Topics.  The BizTalk team released a feature pack back in October 2010 that better enabled BizTalk to leverage the Service Bus Relay capabilities.  This feature pack does work well, but it did not allow for connectivity to Service Bus Queues and Topics since they weren’t even available back then.

In the fall of 2011, the talented Paolo Salvatori wrote a very detailed article on how you can integrate BizTalk 2010 with Service Bus Queues and Topics.  While Paolo’s solution does work, it does require some additional effort and some people may be a little overwhelmed by it.  But I do give credit to Microsoft and Paolo for coming up with a solution, considering BizTalk 2010 was released well before Service Bus Queues and Topics were commercially available.  Their solution just validates why BizTalk leveraging WCF is a good idea: when investments are made in WCF, BizTalk usually benefits.  All in all, it was a good stop-gap for anyone desperate to integrate BizTalk 2010 with Azure.

Fast forward to July 2012, when Microsoft released the BizTalk 2010 R2 CTP.  Microsoft has delivered on making integration with Service Bus Queues and Topics very simple.  The BizTalk team recently released a blog post which provides an overview of some of these new features.  I thought it would be beneficial to provide a walkthrough for anyone interested in more details than what Microsoft included in that post.

Scenario

The scenario that we are about to explore includes a client application that will publish a typed Brokered message from a Console application to a Service Bus Queue.  BizTalk will then use the new SB-Messaging adapter to retrieve the message and simply write it to the file system.  As an experienced BizTalk guy, I like strongly typed messages and I am not afraid to admit it.  So as part of this solution I am going to include a strongly typed BizTalk schema that I am going to deploy.  For this walkthrough I am not going to transform this message, but anyone familiar with BizTalk will be able to take this solution and adapt it for their needs.

Client Application

  • Launch Visual Studio 2012 and create a C# Console application.  I called my application BrokeredMessageToBizTalk

image

  • Next I will use the NuGet Package Manager to install the Windows Azure Service Bus package.  You can access NuGet by clicking the following within Visual Studio: Tools - Library Package Manager - Manage NuGet Packages for Solution.

image

  • Since I want to deal with typed messages I am going to create a class called PowerOut.  Since I work in the Power Industry I will over-simplify a use case that involves a customer whose power is out.  They will send a message from a client application (it could be a web page, mobile phone app etc.) to a Service Bus Queue.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace BrokeredMessageToBizTalk
{
    public class PowerOut
    {
        public string CustomerName;
        public string PhoneNumber;
        public string Address;
       
    }
}

  • Within our Program.cs file we want to include the following code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;
using System.Runtime.Serialization;
using System.IO;

namespace BrokeredMessageToBizTalk
{
    class Sender
    {
   
        const string QueueName = "PowerOutageQueue";
        static string ServiceNamespace = "YOUR_NAMESPACE";
        static string IssuerName ="owner";
        static string IssuerKey = "YOUR_KEY";

        static void Main(string[] args)
        {
            //*****************************************************************************************************
            //                                   Get Credentials
            //*****************************************************************************************************          
            TokenProvider credentials = TokenProvider.CreateSharedSecretTokenProvider(Sender.IssuerName, Sender.IssuerKey);
            Uri serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", Sender.ServiceNamespace, string.Empty);

            MessagingFactory factory = null;

            try
            {
                //***************************************************************************************************
                //                                   Management Operations
                //***************************************************************************************************       
                NamespaceManager namespaceClient = new NamespaceManager(serviceUri, credentials);
                if (namespaceClient == null)
                {
                    Console.WriteLine("\nUnexpected Error: NamespaceManager is NULL");
                    return;
                }

                Console.WriteLine("\nCreating Queue '{0}'...", Sender.QueueName);

                // Delete if exists
                if (namespaceClient.QueueExists(Sender.QueueName))
                {
                    namespaceClient.DeleteQueue(Sender.QueueName);
                }

                namespaceClient.CreateQueue(Sender.QueueName);

                //***************************************************************************************************
                //                                   Runtime Operations
                //***************************************************************************************************
                factory = MessagingFactory.Create(serviceUri, credentials);

                QueueClient myQueueClient = factory.CreateQueueClient(Sender.QueueName);

                //***************************************************************************************************
                //                                   Sending messages to a Queue
                //***************************************************************************************************
               

                Console.WriteLine("\nSending messages to Queue...");

                //Create new instance of PowerOut object
                PowerOut po = new PowerOut();
                po.CustomerName = "Stephen Harper";
                po.PhoneNumber = "613-123-4567";
                po.Address = "24 Sussex Drive";

                BrokeredMessage message = new BrokeredMessage(po, new DataContractSerializer(typeof(PowerOut)));
              
                myQueueClient.Send(message);
             

                //Uncomment this code if you want to write a sample file to disk

                //using (FileStream writer = new FileStream("c:/temp/file.xml",FileMode.Create, FileAccess.Write))
                //{
                //    DataContractSerializer ser = new DataContractSerializer(typeof(PowerOut));
                //    ser.WriteObject(writer, po);
                //}

                Console.WriteLine("\nAfter running the entire sample, press ENTER to exit.");
                Console.ReadLine();
            }
            catch (Exception e)
            {
                Console.WriteLine("Unexpected exception {0}", e.ToString());
                throw;
            }
            finally
            {
                // Closing factory close all entities created from the factory.
                if(factory != null)
                    factory.Close();
            }
           
        }

    }
}

Of the code above I want to highlight a couple different lines:

  • The first one deals with the DataContractSerializer as seen below.        

BrokeredMessage message = new BrokeredMessage(po, new DataContractSerializer(typeof(PowerOut)));

If you do not use a DataContractSerializer you can expect undesirable results when BizTalk retrieves the message from the queue.  As mentioned in the recent BizTalk team blog post: “Brokered Message .NET API uses Binary encoding. To avoid this issue, you will need to use Text by explicitly provide your own serializer, instead of the default serializer.”

  • The next deals with the few lines that have been commented out.  Since I want to use typed messages within BizTalk, I can generate a sample XML message using the code below; a sketch of what the resulting file might look like follows the snippet.  This will allow me to generate a BizTalk schema using tools provided within Visual Studio.

                //using (FileStream writer = new FileStream("c:/temp/file.xml",FileMode.Create, FileAccess.Write))
                //{
                //    DataContractSerializer ser = new DataContractSerializer(typeof(PowerOut));
                //    ser.WriteObject(writer, po);
                //}
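For reference, the file this snippet writes out should look roughly like the following.  Treat both the namespace and the values as placeholders: the DataContractSerializer derives the namespace from the CLR namespace of the PowerOut class and emits public members in alphabetical order.

<PowerOut xmlns="http://schemas.datacontract.org/2004/07/YourClientNamespace" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <Address>24 Sussex Drive</Address>
  <CustomerName>Stephen Harper</CustomerName>
  <PhoneNumber>613-123-4567</PhoneNumber>
</PowerOut>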

*As a side note – wouldn’t it be nice if BizTalk natively supported .NET classes (from a messaging perspective)? Hint, hint.*

BizTalk Application

We can now create a BizTalk application.  Since we are using the new BizTalk 2010 R2 CTP, we can also use the latest version of Visual Studio 2012.  As I mentioned earlier, I want to process typed messages, so our BizTalk solution will be very simple: it will only include a Schema.  We will deploy this schema to BizTalk so that when an instance of the message is published to the MessageBox, a known schema is deployed that matches the message type.

  • We can now create a new BizTalk application. I have called mine PowerOutage and I have also added a Strong Name Key called PowerOutage.snk.

image

  • Next I want to create a new Schema based upon the sample file that we previously generated.  I can create this new schema by right-clicking on the BizTalk project (PowerOutage) - Add - Add Generated Items.
  • When prompted, click on the Generate Schemas label and then click the Add button.

image

  • Select Well-Formed XML from the Document type dropdown and then we need to provide the name of our sample file.  Click OK to proceed.

image

  • We will now have a schema added to our solution that represents our PowerOutage class.

image

  • Deploy our BizTalk Application
  • When we launch the BizTalk Admin Console we will discover our PowerOutage application.
  • We now need to create a Receive Port and corresponding Receive Location.  In this situation we are going to use the SB-Messaging Adapter.

image

  • When we click the Configure button we will have a few more properties to fill out, including our URL.  Our URL is going to include our Namespace (highlighted in green) and our Queue Name (highlighted in orange).

image

  • Next we need to click on the Authentication tab.  Within this tab we will provide our Namespace as it relates to the Access Control Service (ACS), and our Issuer Name and Key.

image
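For reference, the Access Control STS URI for an ACS-secured Service Bus namespace of this vintage follows the shape below; the exact host is an assumption based on the default ACS endpoint, so substitute your own namespace:

https://<your_namespace>-sb.accesscontrol.windows.net/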

  • The Properties tab is not used in this example.  I will further examine it in a later post.
  • With our Receive Port and Receive Location created we can now move on to our Send Port.  For this example we are simply going to create a File Drop where we can write out the file that we have received from the Service Bus Queue.

image

  • Since we do not have any Orchestrations, we need to wire up a subscription for our inbound message.  In order to do this we will simply create a “Send Port Subscription” by setting a filter (for example, BTS.ReceivePortName == <your Receive Port name>).

image

  • We can now Start our BizTalk application and bounce our Host Instance(s) (if applicable)

Testing our scenario

  • Next, launch our Console Application and we will discover that our message has been sent to our Queue.

image

  • If we check the File Drop that was specified in our Send Port we should see a newly created file.  When we open this file we should recognize the content that we populated in our Console application.  Since we now have typed data within BizTalk it will be easy to transform it into other message types so that we can exchange data with other systems such as Line of Business (LOB) systems.

image

Conclusion

Now that wasn’t so bad, was it?  For experienced BizTalk people this process should be a breeze.  The only area that initially hung me up was the DataContractSerializer that is specified in our console application.  The other good news is that we are just scratching the surface in this blog post.  Look for more posts related to BizTalk and Service Bus integration using the new BizTalk 2010 R2 CTP.

BizTalk 2010 R2 CTP: Azure Service Bus Integration–Part 2 Brokered Message Properties


In my last post I provided a walkthrough that allows you to send a typed Brokered Message from a Console application to a Service Bus Queue, have BizTalk retrieve this message, and then write it to disk.  I am now going to expand upon that scenario and describe how we can leverage Brokered Message Properties within BizTalk to route the message to different locations using BizTalk’s promoted properties.
What is a Brokered Message Property?
In many ways a Brokered Message Property is very similar to a Promoted Property within BizTalk.  These properties can be used to capture meta-data outside the body of the message.  We can then use these properties for routing within the Service Bus when delivering messages to different Service Bus Topics. It is important to note that we don’t have to use these properties for just routing.  We can also use them as part of business logic in downstream systems if we so desire.
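As a minimal sketch of the “business logic in downstream systems” point, a plain .NET consumer can inspect the property bag directly.  Assume bm is a BrokeredMessage it has just received; the property name matches the one we use later in this post:

//Properties is an IDictionary<string, object> carried outside the message body
object isCritical;
if (bm.Properties.TryGetValue("isCriticalCustomer", out isCritical))
{
    Console.WriteLine("Critical customer: {0}", (bool)isCritical);
}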
Why is this important for BizTalk?
As I mentioned in the previous paragraph, we can use Promoted Properties within BizTalk to route messages, and we can also use them to capture metadata if we want (although you should look at distinguished fields instead if that is your intent).  In the BizTalk 2010 R2 CTP there is now support for transitioning Brokered Message Properties from Service Bus Queue clients to BizTalk promoted properties.  BizTalk applications themselves do not understand a Brokered Message Property, but BizTalk will convert these Brokered Message Properties into BizTalk Promoted Properties where they can be used to route messages.
Scenario Walkthrough
In my previous blog post I used a Power Outage scenario.  My client application would pass along customer information to a Service Bus Queue and then BizTalk would pick that message up and write it to disk.  In a ‘real life’ scenario I would have routed that message to a Customer Information System (CIS) or a Work Order Management (WOM) system so that a field operations team could address the power outage.  In this walkthrough I am going to build upon that scenario.  The difference this time around is that I am going to introduce a Brokered Message Property called isCriticalCustomer.  I hate to publicly admit it, but not all customers are treated equally when it comes to delivering power.  An example of a Critical Customer may be a hospital.  It is more important for a Power company to get their power on before yours.  A patient’s respirator is more important than someone watching the latest American Idol episode. 
Within my Console application this isCriticalCustomer property will be set as a Brokered Message Property.  When this message is retrieved by BizTalk this property will be converted into a Promoted Property and BizTalk will then use that Promoted Property to route the message to a different destination.
Note: A person with a lot of Service Bus experience may say why don’t you just use Topics?  I could have a Topic for regular customers and a Topic for Critical Customers.  This is also a valid pattern but for the purposes of demonstrating BizTalk capabilities I will leave the routing to BizTalk.
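For completeness, here is a minimal sketch of that Topic-based alternative; the topic and subscription names are hypothetical, the serviceUri and credentials objects are the ones from the Part 1 client, and each SqlFilter runs against the same Brokered Message Property that we are about to set:

//Create the topic if it does not already exist
NamespaceManager namespaceClient = new NamespaceManager(serviceUri, credentials);
if (!namespaceClient.TopicExists("poweroutagetopic"))
{
    namespaceClient.CreateTopic("poweroutagetopic");
}

//Each subscription filters on the isCriticalCustomer property
namespaceClient.CreateSubscription("poweroutagetopic", "CriticalCustomers", new SqlFilter("isCriticalCustomer = TRUE"));
namespaceClient.CreateSubscription("poweroutagetopic", "RegularCustomers", new SqlFilter("isCriticalCustomer = FALSE"));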
Modifying Queue Client
I am not going to display all of the code required for this client to work.  I am going to be adapting the code I listed in my previous post, so please refer back to that post for the starting point.  I will include any areas within this post where I have made changes.
In the code below I am going to create and send two messages.  In red you will discover that I am setting a Brokered Message Property called isCriticalCustomer. In the first message I am indicating that this is not a critical customer (aka a regular customer).  In the second message I am saying that it will be a Critical Customer.  Once we get to the BizTalk section you will see how we can use this property to route the message within BizTalk.
              //Create new instance of PowerOut object
              //This customer will not be a Critical Customer
              PowerOut po = new PowerOut();
              po.CustomerName = "Stephen Harper";
              po.PhoneNumber = "613-123-4567";
              po.Address = "24 Sussex Drive";

              BrokeredMessage message = new BrokeredMessage(po, new DataContractSerializer(typeof(PowerOut)));
              message.Properties.Add("isCriticalCustomer", false);
              myQueueClient.Send(message);

              //Create new instance of PowerOut object
              //This customer will be a Critical Customer
              po = new PowerOut();
              po.CustomerName = "Calgary General Hospital";
              po.PhoneNumber = "403-123-4567";
              po.Address = "1 Red Mile Drive";

              message = new BrokeredMessage(po, new DataContractSerializer(typeof(PowerOut)));
              message.Properties.Add("isCriticalCustomer", true);
              myQueueClient.Send(message);

BizTalk Modifications
You may recall from my previous post that my BizTalk solution was very simple, as I only had a schema that represented the Customer message being sent from my client application.  In order to support our new scenario I only need to add one more artifact to my solution: a Property Schema.  The reason I need to add this schema is that I need an artifact within my BizTalk application to “hold” these values as they are populated when BizTalk receives the message.  This is no different than when you want to take a value from one of your “regular” BizTalk schemas and turn it into a Promoted Property.
Within our BizTalk solution we need to do the following:
  • Add a PropertySchema to our project.  Once it has been added there will be a default property that we will rename to isCriticalCustomer and change the data type to be a boolean.
image
  • We now need to re-deploy our application.
  • Open up the ReceiveBMfromServiceBus Receive Location and click the Configure button.  Now click on the Properties tab.  Within this tab we are going to specify the namespace of our PropertySchema.  If you are unsure where to get this namespace from, look in the image above and notice that the value of the targetNamespace matches the value that I have put in this text box.  We also need to ensure that the Promote Brokered Message Properties checkbox is checked.
image
  • Next we are going to remove our previous Send Port and create two new Send Ports: one for Regular Customers and one for Critical Customers.
  • Below is the Send Port for regular customers.  Notice that a separate sub-folder called RegularCustomers has been created for these files.
image
  • Click on the Filters label and then add a new Property.  Within the dropdown list you will find the property that we created in our PropertySchema, isCriticalCustomer.  We need to select this property and then set the Value to false.
image
Note: When you pull down the Property drop down you will also discover the Out of the Box Service Bus Brokered Message properties.  These properties are out of the scope of this post but it is something that may be beneficial down the road. 
  • We now want to perform similar actions on our other Send Port, which will be used to send our Critical Customer messages.
image
  • Once again we are going to click on the Filters label.  We will use the isCriticalCustomer property again but this time we will set the Value to true.
image
  • We can now bounce any affected Host Instance(s) and start our application.
Testing our Application
As you may recall, we modified our Console application so that it sends two messages to the same PowerOutage queue.  In the first message, we set the isCriticalCustomer Brokered Message property to false.  In the second message, for the hospital, we set it to true.  The end result is that we should receive one message in our Regular Customers folder and one in our Critical Customers folder.
  • As promised when I run the application I will find one message in each corresponding folder:
image
  • If I open the files I will discover that the right message was delivered to the correct folder:
image
Conclusion
Overall it is a pretty slick, and seamless, experience.  I think the BizTalk product team has done a great job in bridging the Service Bus Brokered Messaging Property with BizTalk’s Promoted Property.  In my opinion, the Azure Service Bus and BizTalk Server really complement each other by providing robust Hybrid solutions.  It is great to see smooth interoperability between these two technology sets.
This concludes this blog post.  I am not done yet with this series as I have still just scratched the surface.  I plan on writing more about my experience with Sending messages to Service Bus Queues/Topics from BizTalk and perhaps dive into some different messaging patterns.

BizTalk 2010 R2 CTP: Azure Service Bus Integration–Part 3 Sending message to Service Bus Queue

This is the third post in a series where I discuss integrating BizTalk 2010 R2 CTP with the Azure Service Bus.  In Part 1 I discussed BizTalk receiving a message from a Service Bus Queue.  In Part 2 I expanded on the scenario that I described in Part 1, adding support for Brokered Message Properties.  In this post I am going to switch gears and have BizTalk send messages to a Service Bus Queue.

Scenario
In my previous posts, I have focused on Power Outage scenarios as this is an industry that I am very familiar with.  In Post 2, I discussed situations where we may have critical customers and we want Power Outage incident tickets to have a higher priority when critical customers are involved.  Whenever you are dealing with Critical Customers, there are generally dire consequences if their power is not restored as quickly as possible, whether you are dealing with hospitals, where people could die, or Oil and Gas operations, where downtime may result in revenue losses in the hundreds of thousands of dollars per hour (or more).  Oftentimes, Power Delivery organizations will have Major Account Representatives (or MARs for short).  These people are responsible for maintaining business relationships with these types of customers to ensure service levels and expectations are being met.

In Part 2, we left off delivering messages to a Work Order Management system that can dispatch these trouble events to field operations staff to address.  We created two separate paths: one for regular customers and one for critical customers.  The intention of this post is to have the Work Order Management system provide a message back to BizTalk that BizTalk can push to an “Estimated Time of Restore” queue.  A MAR account application can then subscribe to these events.  Having this information at the MAR’s fingertips will allow them to contact the customer to give them the bad, or good, news.

Service Bus Client
  • A new C# console application has been added to our solution from the previous posts called BrokeredMessageFromBizTalk.
image
  • Next we are going to add a class called EstimatedTimeToRestore.  It looks an awful lot like our PowerOut class from our previous scenario with the major difference being the RestoreTime property.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace BrokeredMessageFromBizTalk
{
    public class EstimatedTimeToRestore
    {
        public string CustomerName;
        public string PhoneNumber;
        public string Address;
        public DateTime RestoreTime;
    }
}
  • Below is the code that I have placed in Program.cs. I do have a section of code commented out that will allow us to write out an instance of EstimatedTimeToRestore to disk. We will want a copy of this file so that we can use it within BizTalk to generate an XSD schema.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;
using System.Runtime.Serialization;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

namespace BrokeredMessageFromBizTalk
{
    class Receiver
    {
        const string QueueName = "<queue_name>";
        static string ServiceNamespace = "<your_namespace>";
        static string IssuerName = "owner";
        static string IssuerKey = "<your_key>";

        static void Main(string[] args)
        {

            //Uncomment this code if you want to write an instance of your data class to disk
            //EstimatedTimeToRestore etr = new EstimatedTimeToRestore();
            //etr.Address = "1 Calgary Way";
            //etr.CustomerName = "General Hospital";
            //etr.PhoneNumber = "403-1234567";
            //etr.RestoreTime = DateTime.Now;
            //using (FileStream writer = new FileStream("c:/temp/ETRfile.xml",FileMode.Create, FileAccess.Write))
            //{
            //    DataContractSerializer ser = new DataContractSerializer(typeof(EstimatedTimeToRestore));
            //    ser.WriteObject(writer, etr);
            //}
          
            //Create instance of tokenProvider using our credentials
            TokenProvider tokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
                Receiver.IssuerName, Receiver.IssuerKey);
            Uri uri = ServiceBusEnvironment.CreateServiceUri("sb", Receiver.ServiceNamespace, string.Empty);
            MessagingFactory messagingFactory = MessagingFactory.Create(uri, tokenProvider);
            QueueClient qc =  messagingFactory.CreateQueueClient(Receiver.QueueName, ReceiveMode.ReceiveAndDelete);
            BrokeredMessage bm;
            while ((bm = qc.Receive(new TimeSpan(hours: 0, minutes: 0, seconds: 5))) != null)
            {
                var data = bm.GetBody<EstimatedTimeToRestore>(new DataContractSerializer(typeof(EstimatedTimeToRestore)));
                Console.WriteLine(String.Format("An estimated time of restore {0} has been received for {1}", data.RestoreTime, data.CustomerName));
            }
        }
    }
}
  • Most of this code is nothing that you haven’t seen before as part of the SDK samples or other blog posts out there.  The one line that you do need to be aware of is where we deserialize the message that was pulled off of the Queue into an instance of our EstimatedTimeToRestore class.  This allows us to use a typed message.  You may recall from Posts 1 and 2 that we were using this same Serializer when pushing messages to the Queue so that BizTalk would be able to receive typed messages from the Queue.  The process is no different on the receive side.
var data = bm.GetBody<EstimatedTimeToRestore>(new DataContractSerializer(typeof(EstimatedTimeToRestore)));
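One caveat with the receive loop above: ReceiveAndDelete removes the message from the queue as soon as it is handed to us, so a crash mid-processing loses it.  If that matters in your scenario, a PeekLock sketch (same client as above; only the receive mode and the Complete call change) would look like this:

QueueClient qc = messagingFactory.CreateQueueClient(Receiver.QueueName, ReceiveMode.PeekLock);
BrokeredMessage bm;
while ((bm = qc.Receive(TimeSpan.FromSeconds(5))) != null)
{
    var data = bm.GetBody<EstimatedTimeToRestore>(new DataContractSerializer(typeof(EstimatedTimeToRestore)));
    Console.WriteLine("An estimated time of restore {0} has been received for {1}", data.RestoreTime, data.CustomerName);

    //Only remove the message from the queue once processing has succeeded
    bm.Complete();
}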

Creating PowerRestoreQueue
In our previous examples, we had the client application ensure that our Queue existed prior to putting a message in it.  This time around we are going to use the Azure Service Bus Portal to perform this operation.

As of this writing, all Service Bus configuration occurs within the “old” portal.  To add a queue, select your namespace and then click on the New Queue button.  You will then need to provide a name and configure optional properties, if so desired, and then click the OK button.  You will then see the queue has been added successfully.  For the purpose of this example we are going to use a name of powerrestore.

image
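If you would rather stick to code, a minimal sketch using the same NamespaceManager approach from the earlier posts would be the following (reusing the token provider and URI pattern from the Queue Client above):

TokenProvider tokenProvider = TokenProvider.CreateSharedSecretTokenProvider(Receiver.IssuerName, Receiver.IssuerKey);
Uri uri = ServiceBusEnvironment.CreateServiceUri("sb", Receiver.ServiceNamespace, string.Empty);
NamespaceManager namespaceClient = new NamespaceManager(uri, tokenProvider);
if (!namespaceClient.QueueExists("powerrestore"))
{
    namespaceClient.CreateQueue("powerrestore");
}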

Modifying BizTalk Solution
  • Once again we are going to want to generate a typed XSD schema based upon the sample file that was generated by our Queue Client code.  We can do so by:
    • Right-clicking on the BizTalk project (PowerOutage) - Add - Add Generated Items.
    • When prompted, click on the Generate Schemas label and then click the Add button.
    • Select Well-Formed XML from the Document type dropdown and then we need to provide the name of our sample file.  Click OK to proceed.
  • We now need to re-deploy our BizTalk application by right mouse clicking on our Solution and clicking Deploy Solution.
  • Within the BizTalk Administration Console we now need to add a Receive Port and Receive Location.  There isn’t anything super special about this Receive Location.  We will use the FILE Adapter, the XML Receive Pipeline, and a local file system URI.  Do make a note of the Receive Port Name as we will use it as a filter in the Send Port that we are about to create.
image
  • Create a new Send Port using the new SB-Messaging Adapter and specify the PassThruTransmit Send pipeline.
image
  • Click the Configure button.  We now need to specify our Destination URL including our Namespace(underlined in green) and the name of our queue (underlined in red).
image
  • Next, click the Authentication tab and modify the Access Control STS URI.  We need to provide our namespace (underlined in green).  Be sure to leave the “-sb” string after the namespace.
image

  • Lastly, we need to create a Filter so that when BizTalk receives a message from the Receive Port that we previously configured, it will send it to the Azure Service Bus queue.  To do this we need to click on Filters, select the BTS.ReceivePortName property, and specify our Receive Port name as the Value.
image

Testing our Solution
In order to test our application, we will focus on the BizTalk aspects first. 
  • We will want to take our sample XML file, that we previously generated from our Queue client, and drop it in our BizTalk receive location folder:
image
  • BizTalk will now pick up this file and deliver it to our Service Bus Queue.
  • Next, we can start up an instance of our Queue Client.  We will soon discover that our message has been dequeued and that our console application informs us that an updated Estimated Time of Restore has been provided.
image


Conclusion
Once again, a very seamless experience when taking your existing BizTalk skills and using them to create Cloud or Hybrid solutions.  The only real area that you need to be concerned with is the serialization of messages that you are putting on, or pulling from, the queue.

This series will continue, I have a few other areas that I plan on exploring related to Azure Service Bus and BizTalk integration. Stay tuned...

