
Closing our Data Centre

Data Centre – The End of an Era

We recently decommissioned our data centre. For the last 15 years we have been hosting customers’ workloads on our own equipment. This started in Docklands, London and then we moved to various locations in Manchester. The system hosted line-of-business applications, dedicated virtual machines, Windows and Linux website hosting servers, VoIP servers, VPN servers, mail filters and a variety of other little services. We had five 1Gbit/s Internet feeds from three different backbone providers with our own BGP routers and IP allocation.

This solution served us well for many, many years but it was time for a refresh, so we had to take a long hard look at what we were doing. Replacing a six-node virtualisation cluster with 10Gbit/s storage is a costly exercise. Having our own equipment and facility means that monthly costs are fixed but the initial outlay is high. The facility also gets old over time and eventually needs to be replaced again.

So, how about using a public cloud?…
  • Let someone else worry about disk crashes, connectivity issues, firewalling, upgrades, etc.
  • Just pay each month for what we consume
  • No expensive capital costs
  • Advanced features like AI, IoT, replication and multiple regions

But what about doing it ourselves?…
  • Costs of our own facility are fixed, so profits are good if we have many customers
  • Being the ‘master of our own ship’ means we can fix problems ourselves and not be reliant on an external provider
  • Offer interesting, unique services
  • Differentiate ourselves from other IT providers

In the end, factoring in the capital expense, the power and flexibility of the cloud and not wanting to worry anymore about equipment failures, we opted to migrate to the cloud.

As a certified Microsoft Partner, we had already worked with their general-purpose public cloud, Microsoft Azure, so that was the natural choice.

Over a period of months, we migrated customers’ VMs and LoB applications to Azure in London. They all benefited immediately from the modern infrastructure; especially noticeable was the performance increase afforded by SSD-based virtual machines. Their VMs are now up-to-date Windows Server 2019 installations behind very secure firewalls and access policies.

Our development team also migrated a series of server-based ASP.NET web applications to Azure’s Platform-as-a-Service solution. This means that those services just run inside the Azure cloud; there’s no need to configure servers, upgrade them or protect them, we just let Microsoft do all of that. It’s the same for the SQL Server databases: you choose how fast you need the database to go and pay for that service; no expensive ‘per-core’ licensing or performance limitations.

Everything is then backed up or mirrored to Microsoft’s other UK data centre in Cardiff… this is something that would simply be impossible for us to afford with our own equipment.

The solution has been running for a few months and it is great. Customers are happy with the performance of the applications and VMs. The PaaS platform means that reliability of websites and web apps is 100%; this too would be impossible for our own data centre as hosting machines must be restarted each month for their security updates.

All-in-all, the migration from our DC to Azure has been great. Reliability is up, performance is up, customer satisfaction is up, whilst costs and sleepless nights are down! We are sad that we no longer have our own servers and solution, but Azure is bigger, better and more flexible, so we are OK.

The Future

Azure allows us to do things that were previously out-of-reach, such as…

  • AI Solutions – Machine learning in your own DC is basically impossible
  • Multi-site Replication – Replicating VMs between DCs is possible but you need an entire second data centre with a second set of costs
  • Global Reach – Our PaaS services can be positioned near to customers in any part of the world, whereas our data centre was just in Manchester

“Bye bye DC1. We’ll miss you, but your time had passed.”

PS. Special mention must go to HP for making such brilliant hardware. Every server worked beautifully for ten years or more. Splendid work guys!

XenServer-based six-node virtualisation cluster using (mostly) HP ProLiant hardware.

Jason Timmins, our Technical Director, stands in the rack that used to house our hosting facility. At his feet are five 1,000Mbit/s Internet connections.


How to Build a Recommendations Service in Azure and Microsoft Cognitive Services

How to Build an AI-based Recommendations Service in Azure and Microsoft Cognitive Services

Overview

We’ve all seen Amazon’s clever ‘Customers who bought this item also bought…’ feature on their website. Amazon use it to upsell and to expose parts of their catalogue that customers might not know about. It’s clear that Amazon spent a great deal of time designing and implementing their solution, but it is now within reach of small organisations too, thanks to services like the one we’ll show here.

How it Works

You might be able to imagine for yourself how this kind of recommendations engine (or collaborative filter, to give it its technical name) works. In Amazon’s case, they upload vast quantities of sales data to an AI they’ve built and then, when you’re browsing their products, they use the AI to see which other products were bought by people who also bought the product you’re looking at. The site then shows this list as a scrollable table on the product page. This all happens in real-time.

In our example, we too will upload raw sales data, train the AI and then create a web service we can use in our website. This is done using the Microsoft Cognitive Services Recommendations solutions template running in Microsoft Azure.

Getting Started

You’ll need a working Azure account. If you have one already you can use that, otherwise, you can get a free Azure account here. Please sign in to your Azure account now.

The Recommendations Solution Template

Go here to find the template we’ll be using. Here’s what it looks like at the time of writing…

Deploying the Solution

It’s important to understand that almost everything you do in Azure costs money and this is true of the resources created for this solution. Please give careful consideration to this as we move through the deployment steps.

Click the ‘Deploy’ button to start the process.

This is step one of the Cortana Intelligence Deployment wizard…

Complete the form to your requirements. Here we’ve given the deployment a name (this will be used to create an Azure Resource Group), chosen our subscription and a location. Hit Create.

This page tells us what needs to be created for the solution to run: an App Service account, a Storage Account and Application Insights to monitor how it’s doing…

Click Next to move along.

This page asks which kind of App Service you want to create. Feel free to review their different specifications as this will affect the performance and functionality of the service.

For our purposes a Basic 1 account is fine. Click Next.

The next page is about replication in the Storage Account. You can choose either Local or Geo replication.

For our purposes, LRS (locally-redundant storage) is fine. Click Next.

The next page is about Application Insights. It’s this that allows you to see the performance and usage of your new recommendations service.

Choose your closest region and click Next.

The template will now deploy the solution into your Azure account; it takes a couple of minutes and then you’ll get an Email and a new page with some important information on it…

All of those URLs and keys can be found within the resources created by the wizard but this is a handy consolidated view of all the important information so you might want to copy-and-paste it into a document for safe keeping.

Our New Recommendations Service Resource Group

Go to your Azure portal and look at your resource groups. You should see a new one named from the first field on the first page of the deployment wizard, in our case, recsrvc1. Here’s what you’ll find inside…

These are a set of pretty standard resources: an App Service Plan and its associated App Service, Application Insights and a Storage Account.

Loading Some Data

The wizard has deployed all of the back-end code into the Azure resources for you, so let’s take a look. From the final step of the deployment wizard, select the Admin Key and copy it to your clipboard. Then click on the Recommendations UI link. You should see something like this…

Paste the Admin API Key into the field and click Login.

You’ll now see an empty recommendations solution with no model…

Train Your First Model

Click on Train New Model and let’s see what we need to do…

Hmm, looks a bit complicated, doesn’t it? Fear not, here’s what we need to do…

  1. Prepare your sales data
  2. Prepare your Storage Account
  3. Upload your sales data
  4. Fill-out the form
  5. Click ‘Train’

Prepare Your Sales Data

This page contains everything you need to know about this solution, including the source code and some examples. The Getting Started Guide is pretty easy to follow. However, we’re interested in getting our sales data into the correct format, and this document talks about that. Microsoft refer to the data as Usage Events; put simply, it’s a big list of what you sold, to whom and when.

The Usage file is a comma separated value (CSV) file in this format…

<UserID>,<ItemID>,<Time>

So an example might look like this…

456,RedBucketSmall,2018-01-09T16:05:26
456,BlueBucketSmall,2018-01-09T16:05:26
998,RedBucketLarge,2018-01-09T16:35:24

Here you can see that customer ID 456 bought a small red bucket and a small blue bucket at the same time (presumably on the same order), and then, shortly after, customer ID 998 bought a large red bucket.

The trickiest part of this exercise is getting your sales data into this format. It’s just these three items that you need: who, what and when.

Naturally, if you can get your raw data into Excel, that’s a great place to work with it and get it into the correct format. Getting the time into the required format can be tricky: select your time column, right-click and choose Format Cells, then ‘Custom’, and enter ‘yyyy-mm-ddThh:mm:ss’ into the Type field. This will format the time as required.

Once you’ve got your sales data into the correct format, you need to export it as a CSV to your local machine.
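
If your sales data lives in a database or a line-of-business application rather than a spreadsheet, a few lines of code can write the file directly. Here’s a minimal C# sketch; the SaleRecord type and the sample rows are just placeholders for your own data…

// A minimal sketch (not a production exporter): write usage events in the
// <UserID>,<ItemID>,<Time> format required by the Recommendations service.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

class SaleRecord
{
    public string UserId;
    public string ItemId;
    public DateTime Time;
}

class UsageExport
{
    static void Main()
    {
        // Placeholder rows - replace with data pulled from your own sales system
        var sales = new List<SaleRecord>
        {
            new SaleRecord { UserId = "456", ItemId = "RedBucketSmall", Time = new DateTime(2018, 1, 9, 16, 5, 26) },
            new SaleRecord { UserId = "998", ItemId = "RedBucketLarge", Time = new DateTime(2018, 1, 9, 16, 35, 24) }
        };

        // One usage event per line, with the timestamp in ISO-8601 format
        var lines = sales.Select(s => $"{s.UserId},{s.ItemId},{s.Time:yyyy-MM-ddTHH:mm:ss}");

        File.WriteAllLines("UsageExport.csv", lines);
    }
}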

Preparing your Storage Account

Now that we’ve got our data in the correct format, we need to prepare the Azure Storage Account to hold it. A convenient tool for this is the Azure Storage Explorer, a local application that allows you to interact with your Storage Account directly. Download and install Storage Explorer from here.

Open Storage Explorer, add your Azure Account and sign in.

The interface is much like Windows File Explorer: you’ll see your Azure accounts on the left. Navigate to your subscription, then the new Storage Account and finally the Blob Containers. You can see that the deployment wizard has already created some containers (rather like Windows folders) for the internal parts of the AI models, etc.

Right-click on ‘Blob Containers’, choose ‘Create Blob Container’ and call it ‘sales-data’. Then click on the new ‘sales-data’ container, click on the ‘New Folder’ icon in the toolbar and create a folder called ‘usage’.

Upload Your Sales Data

You now need to upload your sales data CSV file into that ‘usage’ container. Choose ‘Upload’ from the toolbar and select that you want to upload some files.

Use the three dots to find the sales data file you created and upload it. Our file is called UsageExport.csv…
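
As an aside, if you’d rather script this step than use Storage Explorer, the Azure Blob Storage SDK can create the container and upload the file for you. A minimal sketch, assuming the Azure.Storage.Blobs NuGet package and a connection string copied from your Storage Account’s ‘Access keys’ blade…

// A sketch only - assumes the Azure.Storage.Blobs NuGet package; the connection
// string placeholder comes from the Storage Account's 'Access keys' blade.
using Azure.Storage.Blobs;

class UploadUsageData
{
    static void Main()
    {
        string connectionString = "<your-storage-connection-string>";

        // Create (or reuse) the 'sales-data' container
        var container = new BlobContainerClient(connectionString, "sales-data");
        container.CreateIfNotExists();

        // Blob storage has no real folders; the 'usage/' prefix in the blob name acts as one
        var blob = container.GetBlobClient("usage/UsageExport.csv");
        blob.Upload("UsageExport.csv", overwrite: true);
    }
}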

Fill-out the Model Training Form

Now that we have our usage data in place, let’s return to the New Model form and complete it as best we can. Here’s what the completed form should look like; we’ve used ‘sales-data’ as the container name and ‘usage’ as the sub-folder. (Interestingly, the model will import all the files it finds in that folder, but we just have one for now.)

I usually change ‘Co-occurrence Unit’ to ‘Timestamp’ and set the ‘Decay Period’ to 60 days. You can view all the details for these options using the ‘Model Parameters Documentation’ link.

Click ‘Train’

Time to train the model: click ‘Train’. The model will now be created and trained using your data. It doesn’t take long for our 40,000-row data file, but that will vary based on your dataset. Here’s how it looks when it’s finished…

Let’s Try it Out!

Click ‘Score’ to open a page that lets you play with your newly created recommendations service. Try entering one of your popular product IDs into the field and click ‘Get Recommendations’. Product ID 7 is popular for us; here’s what we see…

The results field shows a sorted list of products bought by other customers who also bought product ID 7. The Score value represents how confident the AI is about its decision.

If you enter a list of product IDs, as you would if the customer had items in their website shopping cart, you receive a set of results based upon all of their items…

See how the list updates and the confidence score changes?

Your Website

In the real-world you’d need this to be integrated into your own e-commerce website and for that you’d use your web developers. The newly-trained model can be accessed using a REST-based API. It’s beyond the scope of this document but we can help with that integration and with the routine sales data updates that are needed to keep the model relevant.
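
To give a rough idea of what that integration involves, here’s a hedged C# sketch of a server-side call to the trained model. The base URL and API key come from the final page of the deployment wizard; the exact route and parameter names should be taken from the solution’s own API reference, so treat the ones below as illustrative only…

// Illustrative only - check the solution's API reference for the exact route and
// parameter names. The base URL, API key and model ID are placeholders.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class RecommendationsClient
{
    static async Task Main()
    {
        var client = new HttpClient { BaseAddress = new Uri("https://<your-app-service-name>.azurewebsites.net/") };
        client.DefaultRequestHeaders.Add("x-api-key", "<your-api-key>");

        // Ask for items frequently bought alongside product ID 7
        string modelId = "<your-model-id>";
        string json = await client.GetStringAsync($"api/models/{modelId}/recommend/item?itemId=7&numberOfResults=6");

        Console.WriteLine(json); // a JSON list of recommended item IDs with scores
    }
}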

Extra Stuff

What you’ve seen here is actually just the tip of the iceberg; this recommendations service has many customisable parameters and advanced options:

  • You can, for instance, upload quoted items, clicked/browsed items, (as well as sold items) into the model but have them ‘weigh’ less in the AI’s decision making.
  • You can customise the results set for each specific customer based on their previous purchases.
  • You can artificially add weight to particular/new items to ‘promote’ them in the recommendations.

Summary

Adding a recommendations engine to your e-commerce website (or customer Emails*) could work very nicely for you. Amazon are tight-lipped about how well it works for them, saying only, “Our mission is to delight our customers by allowing them to serendipitously discover great products” but I think it’s clear to see that recommendations are critical to their e-commerce experience.

*-Don’t forget to use the recommendations engine to customise your newsletters for each of your customers. Amazon do this all the time too!

Next Step

We would be delighted to help you design, implement and maintain a recommendations service like this for you. Please feel free to contact us or reach out to me directly if you have any questions.

Author

Jason Timmins – Technical Director – MBM Ltd – 01902 324494 – jason@mbmltd.co.uk


Connecting TTN to Azure Functions

The Things Network and Microsoft Azure Functions

Overview

We love The Things Network for low power LoRaWAN-based IoT hardware and we love Microsoft Azure Functions for server-less data processing. This article shows how to use them together.

The process involves creating an Azure Function to act as a webhook and then using the TTN webhook integration to send data to that function when data arrives from your devices. Further, the Azure Function can also send data back to the IoT device, in the other direction, using the TTN integration (aka a downlink).

Create a Webhook-based Azure Function

Sign in to your Azure Portal and add a ‘Function App’, choosing the appropriate pricing model.

Inside your newly created Function App, add a new function. This will show a list of function templates; find the HTTP Trigger function (in your preferred language – we’ll be using C#) and add it to the Function App. (You could probably use a Generic Webhook too, but the Trigger template works nicely.)

The default sample code looks like this…
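
(Reproduced here for reference rather than copied from the portal; the exact template varies slightly between Functions runtime versions.)

using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // Look for a 'name' parameter on the query string
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    // Fall back to a 'name' property in the request body
    dynamic data = await req.Content.ReadAsAsync<object>();
    name = name ?? data?.name;

    return name == null
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string or in the request body")
        : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
}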

This function handles a web request, finds a ‘name’ parameter and returns a response saying ‘Hello Name’. In our case, however, we want to take the JSON that comes from the TTN integration and process it. First, you’ll need to add some libraries; add this to the top of your code…

#r "Newtonsoft.Json"

using Newtonsoft.Json;
using System.Text;
using System.Net;

This will give us the JSON library for processing the returned JSON data and also the library necessary to send requests back to TTN.

Remove the body of the sample trigger and we’ll replace it with our own code. Here’s how the body of the request from TTN can be captured…

// Get request body
dynamic reqbody = await req.Content.ReadAsAsync<object>();
byte[] data = Convert.FromBase64String((string)reqbody.payload_raw);
string decodedString = Encoding.UTF8.GetString(data);

This gives us a ‘reqbody’ object that contains all the data from TTN’s JSON and a ‘decodedString’ string that contains the JSON of the data from your devices. The JSON structure of the TTN uplink can be found here – https://www.thethingsnetwork.org/docs/applications/http/. Most of the message is to do with the LoRaWAN and TTN network itself; it’s only “payload_raw” (and “payload_fields”) that actually contain the data from your devices. In our case, we only get “payload_raw”, which is a Base64-encoded JSON object that our code converts to ‘decodedString’ for use in your Azure Function.
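
From there, decodedString can be parsed like any other JSON. A small sketch, assuming the device sends fields such as ‘distance’ and ‘speed’ (as in the log example further down)…

// Parse the device's own JSON payload. The field names used here ('distance',
// 'speed') are only examples - use whatever your device actually sends.
dynamic devicePayload = JsonConvert.DeserializeObject(decodedString);
int distance = (int)devicePayload.distance;
string speed = (string)devicePayload.speed;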

This line will send the data from the devices and the TTN device ID to the log. (You can see the function logs by expanding the ‘Logs’ panel at the bottom of the function page.)

log.Info("Data: " + decodedString + " " + reqbody.dev_id);
Nothing will come through to the function at the moment because there’s no TTN integration so let’s do that.

Create a TTN Webhook Integration

Visit your TTN console and select the application you want to send data to Azure from. Choose ‘Integrations’ from the menu bar and then ‘add integration’; the page looks like this…

Choose the ‘HTTP Integration’ and you’ll get its settings page…

You’ll want to take a look at the TTN documentation link as there’s useful stuff in there. For now, enter a sensible name for your integration in ‘Process ID’, then choose ‘default key’ in the ‘Access Key’ field. The URL comes from your Azure Function, so go back to your Azure portal and find ‘</> Get function URL’ (top right) on your function source code page. Copy the URL to the clipboard; it’ll look something like this…

https://<your-app-service-name>.azurewebsites.net/api/<your-trigger-function-name>?code=<your-function-key>

Switch back to the TTN integration page and paste that URL into the ‘URL’ field. Leave all the other fields blank and save your new integration.

Uplink Testing

You now need to make your devices send some data; the TTN integration will then call your Azure Function using the URL you gave it, and you’ll be able to see those function calls in the Logs panel of the function portal. In our example a function log looks like this…

2017-12-15T15:24:52.146 Function started (Id=2d52f339-8c9c-487f-a58d-aad93349535f)
2017-12-15T15:24:52.146 C# HTTP trigger function processed a request.
2017-12-15T15:24:52.146 Data: {"distance": 52, "speed": "25"} pycom_lopy_01
2017-12-15T15:24:52.177 Function completed (Success, Id=2d52f339-8c9c-487f-a58d-aad93349535f, Duration=28ms)

You can see the ‘distance’ and ‘speed’ data that came from the Pycom LoPy device, carried in the payload_raw element of the TTN JSON message.

Sending Data Back – TTN Downlink

You can use your Azure Function (or Azure in general) to send data back to your IoT devices via the TTN integration. Each TTN JSON uplink message has an element known as ‘downlink_url’; it’s this that contains the webhook URL to use to send a reply back to the device. This next piece of code builds a JSON downlink message in the correct format for TTN and sends it to the downlink_url mentioned in the initial message.

Here’s my code to send a reply (known as a downlink)…

 // Sending Reply

 // Get the downlink URL from the uplink message
 Uri ourUri = new Uri((string)reqbody.downlink_url);
 // Create a .NET web request
 WebRequest request = WebRequest.Create(ourUri);
 request.Method = "POST";

 // We're going to use a random number to set the colour of the LoPy's LED
 Random rnd = new Random();
 int r = rnd.Next(1,16777216);
 // Build the JSON that the device will interpret
 string replyJSON = @"{""colour"": " + r + "}";

 // Build the TTN JSON downlink message. Notice the Base64 conversion for the device message JSON
 string postData = @"{""dev_id"": """ + reqbody.dev_id + @""",""port"": " + reqbody.port + @", ""confirmed"": false, ""payload_raw"": """ + Convert.ToBase64String(Encoding.UTF8.GetBytes(replyJSON)) + @"""}";
 log.Info("Response: " + postData);
 byte[] byteArray = Encoding.UTF8.GetBytes(postData); 
 // Set the ContentType property of the WebRequest. 
 request.ContentType = "application/x-www-form-urlencoded"; 
 // Set the ContentLength property of the WebRequest. 
 request.ContentLength = byteArray.Length; 
 // Get the request stream. 
 Stream dataStream = request.GetRequestStream(); 
 // Write the data to the request stream. 
 dataStream.Write (byteArray, 0, byteArray.Length); 
 // Close the Stream object. 
 dataStream.Close();

 WebResponse response = request.GetResponse();

C# is not my first language so this could be prettier but it builds a downlink message and sends it back to TTN. When the device next connects to TTN, this message will be delivered and the device can act upon it… in our case, change the colour of its LED. It would be good form to monitor the WebResponse from TTN to make sure all is well but we ignore it at the moment.
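
If you did want that check, a minimal sketch (replacing the bare GetResponse() call above) might look like this…

 // Read TTN's reply and log the HTTP status so failed downlinks show up in the function log
 using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
 {
     log.Info("TTN downlink response: " + (int)response.StatusCode + " " + response.StatusDescription);
 }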

Conclusion

We like Azure Functions because they are powerful, flexible and scalable and it’s nice to be able to wire-up TTN so that the process is seamless. We’ve not yet deployed this into a live IoT project but we’ve no doubt that it will run successfully and give you very little trouble.

Author

Jason Timmins – Technical Director – MBM Ltd – jason@mbmltd.co.uk
