TECHintersection IoT Speaker Profile: Zach Supalla

The TECHintersection event is coming up on September 14th through the 17th. I have been helping with the planning, specifically around the Internet of Things (IoT) content. As we lead up to the event I want to familiarize you with some of the great speakers we will have. This is going to be a great event and you definitely don’t want to miss it (you can register here).

Zach Supalla, Co-Founder & CEO, Particle


I met Zach Supalla at CES in January – my good friend Stephen Forte invited me to breakfast with Zach and Dan Jamieson, mostly just for social time (for the record this is one of the things I love about Forte – he is constantly connecting the people he likes with no agenda of his own). Over breakfast Zach and Dan told me about Particle (called Spark back then) and how they had built up their company, participated in the HAXLR8R hardware accelerator, got funded and were growing like crazy. They gave me one of their Spark Core devices and in exchange I bought them some eggs.

After playing with the Spark Core a bit, and getting some of my friends hooked on it, I was sold. I love Particle – they make great, inventive products. I am really glad that we (Microsoft) are partnering with Particle the way we are (I smell some IoT Workshops with the Particle Photon in the future).

Here is the summary from Zach’s LinkedIn profile:

Zach Supalla is an entrepreneur, a Maker, and a designer. He is the founder and CEO of Spark (ed. note: now called Particle), a start-up that’s making it easier to build internet-connected hardware. Zach juggles hardware design, front-end software development, and leading his team through the trials and tribulations of a hardware start-up.

Zach is a graduate of HAXLR8R, the only incubator for hardware start-ups that will teach you to order bubble tea in perfect Mandarin. He also has an MBA from Kellogg School of Management and an MEM (Masters in Engineering Management) from McCormick School of Engineering at Northwestern. Before Spark, Zach worked as a management consultant with McKinsey & Company, advising Fortune 500 companies on strategy, operations, and product development.

Zach will be presenting two sessions at TECHintersection:

Ever wanted a button in your home or office that would call an Uber for you? Or text your wife to let her know that you’re coming home late from work? Or order you a pizza? Sit down with the team from Particle to build your own “Internet Button” that can be hooked up to any API to take action when the button is pressed.

While the world is being filled with great prototyping tools like Arduino, Raspberry Pi, and Particle’s Photon and Core, the road from a prototype to a product manufactured at scale is perilous, and while there are plenty of resources to help, they can be harder to find. Join Zach Supalla from Particle as he talks through the process his company went through to turn the Core from a prototype to a mass manufactured product and explains how others can follow the same steps.

Of course you can find Zach and Particle on social media:

Twitter: Particle | Zach

Facebook: Particle

Another IoT Workshop Scheduled

If you saw this post you know that the last IoT Workshop my team and I hosted was a blast. We are getting ready to add some new labs in preparation for another workshop event on Monday, September 14th. If you missed the last one, you should definitely come to this one – and it’s part of a great new event (disclaimer – I have been helping with the IoT content planning).

The new event is TECHintersection and it is being put together by the founders of DEVintersection, DotNetRocks, and one of the masterminds behind large-scale events including Microsoft TechEd, //build and AWS re:Invent. The event has a shared focus on three topic areas that are tightly intertwined – the Internet of Things, Architecture and Security. It’s looking like it is going to be a great few days spent in the city that gave birth to TED Talks – Monterey, CA.

For the IoT Workshop (we are calling this one IoT Firestarter) we are planning a full day, with lots of hands-on time. We will likely be using a new board, which will result in an entirely new set of labs. In this workshop we will go from building your first hardware project to ingesting telemetry into Azure and creating data visualizations with PowerBI (of course we will still do some command/control stuff – and maybe even bring a couple robots…who knows).

Throughout the conference there will be great sessions, including technical sessions from the Windows IoT and Azure IoT product teams, a few sessions from my team on how we have built IoT solutions in the real world (including connected cars, construction sites and more), and a couple of companies who have built IoT start-ups will be there to share their stories, including Zach Supalla from Particle and a keynote by co-founder Adam Benzion. You can get the full session list here.

I hope to see you there!

IoT Labs @

Over the past few weeks my team and I have been working on a set of self-paced labs for building Internet connected Things using Arduino. Last night we got together with 150 of our closest DevIntersection/anglebrackets friends and had a great hack event. Lots of developers got their first exposure to Arduino and to building connected Things. For lots of people it was their first time building hardware instead of just software.

All of the labs are published online, and lots more labs will be coming soon.

Thanks to all that attended last night – we had a blast.


Arduino: Reading Analog Voltage

In this lesson you will use two resistors – a static resistor and a variable resistor – to create a voltage divider that enables you to effectively understand the intensity of light detected by the photoresistor – essentially a light meter. In the previous lesson you learned how to send OUTPUT and in this lesson you will learn to collect INPUT.
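The math behind the lab’s voltage divider can be sketched as follows. This is an illustrative example, not the lab code itself; the 10-bit ADC and 5V reference match a typical Arduino Uno, and the 10k static resistor value is an assumption for the example.

```python
# Voltage divider math behind the light-meter lab (illustrative sketch;
# assumes a 10-bit ADC with a 5V reference, as on a typical Arduino Uno).

V_IN = 5.0          # supply voltage across the divider
R_FIXED = 10_000    # static resistor (ohms) -- an assumed 10k for illustration
ADC_MAX = 1023      # full-scale reading of a 10-bit ADC

def adc_to_voltage(reading: int) -> float:
    """Convert a raw analogRead()-style value (0-1023) to volts."""
    return V_IN * reading / ADC_MAX

def photoresistor_ohms(reading: int) -> float:
    """Solve the divider equation V_out = V_in * R_fixed / (R_fixed + R_photo)
    for the photoresistor's resistance."""
    v_out = adc_to_voltage(reading)
    return R_FIXED * (V_IN - v_out) / v_out

# Brighter light -> lower photoresistor resistance -> higher voltage at the tap.
print(round(adc_to_voltage(512), 2))   # a mid-scale reading is about 2.5 V
```

The same arithmetic runs on the Arduino itself; doing it in plain Python first makes it easy to sanity-check your readings against expected resistances.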

See the lesson at

Configuring the uBlox 6M GPS w/ Compass for a Multirotor (APM 2.6)

Recently I was rebuilding my multi-rotor (x-type quad) and switching from the CC3D flight controller (which I love) to the APM 2.6 (ReadyToFlyer from Ready to Flyer Quads) flight controller. I wanted to add GPS and compass to enable fully autonomous flight, so I added the uBlox 6M GPS W/ Mounting backplane and compass module. Once I installed the GPS module and tried to configure it, I realized there was a lot I didn’t know, and very little clear information on how to set this unit up. Let me help the next guy with what I learned.

Installing the Board

The first problem I had was the actual installation of the board. There are no markings on the board that indicate which side is supposed to face up and which edge should face forward, so I guessed.


I mounted the board with the connectors facing upward (seemed logical – this way I had easy access to them), and the circular component toward the front-left. THIS IS WRONG!

This is not the correct mounting configuration. It took a lot of trial and error and digging through forums until I discovered the correct mounting configuration. I had mounted the board upside down.

The correct mounting is with the text that reads “Ublox GPS Module V2.0” facing up and to the front of the multi-rotor. The small black component is the compass and should be oriented to the front-right (this puts the green LED to the front-left).



This is the CORRECT orientation. Make sure you connect the wiring harness before you mount the board. Once it is mounted it will be hard to access the connector.

Configuring the GPS/Compass in Mission Planner

In the Mission Planner configuration (Initial Setup) there is a step for configuring the compass. Excluding “Manual” there are two options that target the APM flight controller. Although “APM with External Compass” sounds like the correct choice, the correct choice is “APM with OnBoard Compass”. This option sets the Compass Orientation to ROTATION_NONE. This means that the compass is upright and facing forward (the “APM with External Compass” option is for the 3DR GPS/Compass which is mounted upside down, so it sets the orientation to ROTATION_ROLL_180).


A Note On Radio Configuration

While trying to figure out the Compass Orientation I was thrown off by a Radio Configuration that I was unaware of. Every time I did a test flight, the quad copter would pitch backward whenever I pushed the pitch forward. I mistakenly thought this was somehow due to the compass orientation when in fact it was due to the pitch being inverted in Mission Planner.


In the Radio Configuration section, when I pushed the pitch up, the Pitch green bar would go up, and when I pushed pitch down the Pitch green bar would go down. THIS IS WRONG. In Mission Planner the pitch needs to be inverted (so when you push pitch up the Pitch green bar goes down). To solve this I went into the configuration settings in my Turnigy 9X transmitter and reversed the elevator settings (FUNC SETTINGS | REVERSE | ELE).


Golf and the DevOps 3RA

Last week, while on vacation in San Diego, I took a golf lesson from a veteran golf pro, Bob Madsen. For the record, I am not a good golfer. While I have swung a club at a white ball for many years, I have failed to improve during this time due to lack of dedicating myself to improvement (I have only taken a few lessons, and I don’t invest time in practicing regularly). In short, I am a hacker. I know how to golf, but I suck at the execution of the golf process.

During my lesson Bob and I were talking about improvement in golf, and what he was describing to me sounded an awful lot like how I think about improvement in delivering software, which leads me to believe that how to improve in DevOps and agility is not markedly different than how to improve in any skill, like golf. It begins with recognizing the barriers that are preventing or blocking improvement.

Barriers to Improvement

During my golf lesson Bob was describing the typical barriers that golfers hit as they attempt to improve and reduce their score from the 100’s all the way to the low 70’s (also known as ‘scratch’). He drew a diagram that looked something like this.


With this diagram Bob described these barriers that a golfer faces as they attempt to lower their score from the 100’s to the 90’s to the 80’s to the 70’s. The challenges they face at each barrier are different, and get more difficult to overcome at each stage (we spent our time talking about the 100’s to 90’s barriers if that tells you anything). Each barrier requires both different skills and overall improvement of existing skills in order to overcome it. In other words, “what got you here, won’t get you there.” Interestingly, regardless of the level of skill, golfers use basically the same tools throughout, although they learn how to use them better (foreshadowing of another analogy…perhaps).

3RA – DevOps Improvement

I started aligning this with how I think about software delivery and DevOps improvement. Like golfers trying to improve their game, organizations face different barriers at each stage as they attempt to improve in delivering software. I assert that there are four stages on a continuum of improvement that organizations spend time in as they build the skills to overcome the next barrier to their improvement. With that said, I don’t think an organization is in one stage one day, and then clicks over into the next stage. I think it is a slow penetration through the barrier into the next stage. Like my golf game, I won’t suddenly have consistent scores in the next lower score bracket (or stage) from one day to the next. Instead, I will see some success and some failure, hopefully improving the frequency of success until I am consistently performing in the next better stage.

Any skills improvement (whether it is my golf game or an organization’s approach to delivering software) advances progressively through a series of stages as the person or organization overcomes the barriers blocking them from improving. I call this the 3RA stages (pronounced ‘Era’)—Reactionary, Repeatable, Reliable, and Aspirational.




Reactionary

As the name implies, this state is demonstrated by a reactionary approach to the skill. The behaviors of the individual or organization are typically ad hoc and success is achieved mostly through luck. This state is often correlated with significant inter-team conflict and finger-pointing (or in the case of golf, lots of swearing and club throwing). For the record, this is where I am at in my golf skills. Like I said, I have been playing golf for years, but time alone doesn’t yield improvement. In fact, if you believe in the adage “Practice Makes Permanent” then the time I spent doing it the wrong way only makes improvement more difficult. In my case, I have never put in the time and effort to improve my skills enough to achieve anything that would resemble success. I simply keep doing the same dumb stuff I have always done and wonder why I’m not getting better. I have never scored below 100 because I continue to make the same mistakes in how I execute everything from the golf grip and swing to how I think about (or fail to think about) course management. In my case, I have the knowledge (I can describe the grip and swing), but I haven’t learned to successfully execute what I know repeatedly. I have met many organizations that deliver software like this—they may know what successful software delivery looks like, but they don’t know how to execute it and don’t put in the work to improve.


Repeatable

By deciding to focus on improvement through training and regular, consistent practice we can increase the frequency of success slowly and achieve a state of repeatability. For most this is a culture change that requires support at all levels (I better get my wife to support me in spending more time on golf). In this state individuals or organizations start to see some success based on the application of the skills they are developing, and not just as the result of good fortune. They aren’t perfect yet, but at least they can perform the same thing over again, such as swinging the golf club correctly most of the time, releasing software repeatedly without having to invent the release process each time, or implementing a change management process so that changes are handled the same way each time they arise. While the individual or organization can repeat these behaviors based on the skills they have developed, they fail to repeat the behaviors with the necessary regularity and still have more failures and fire drills than they would like. In the golf analogy this is my next goal—bogey golf. I am not trying to become a scratch golfer next—that would be an unreasonable expectation—I just want to lower my typical score into the 90s (my goal for this summer is to get a score in the 90s). In my case, I have begun the culture change (I have stakeholder agreement, now I have to commit to investing the time), I have the tools (TaylorMade clubs and balls, Adidas shoes, Nike clothes, etc.), and I have the core knowledge (I know how to swing the club), and now I must build my skill in execution through regular and consistent practice.


Reliable

As individuals or organizations continue to improve they next achieve a state of consistency where they are beginning to master the repeatable behaviors they have learned and they are executing them with the needed regularity. In other words, they are becoming reliable in their execution of their skills. Whether I ever achieve this state in golf will depend on the level of dedication I exert in building my skill. In this state I would swing the club and hit the ball reliably, making few mistakes in the execution (although occasional mistakes should still be tolerated), and would focus more and more on course management and good decision making to minimize risk. I would expect to par at least ten out of 18 holes, and bogey the rest. In other words, I would expect success most of the time, and minor issues some of the time. For organizations that achieve this state, the frequency of issues has decreased and the velocity at which they are able to deliver software is increasing. They are getting better at using data in their decision making and are therefore delivering products that their users have a higher level of satisfaction with. While they are consistent, there is still room to improve and deliver software faster and at a higher frequency to improve their business results.


Aspirational

The Aspirational state represents the ideal—a scratch golfer, maybe even a touring professional. This is a state that most of us mere mortals will likely never achieve, but some do and they can show us how it is done. For those few who get to this state, they make it look easy because they have invested the time and effort to build their skills to such a level of reliability that success is nearly a given (although their definition of success has likely evolved to something that is difficult for us normal folk to understand). Regardless of whether we are talking about golf or delivering software, the Aspirational state is one that we may continue to strive for knowing we may never fully achieve it (I doubt I will ever be a touring pro golfer, but I am sure that I will always want to improve). For organizations, the Aspirational state is one where true transparency and collaboration enable them to deliver software to production as often as they want, including multiple times per day if they choose. The Aspirational state is one that only a few organizations will fully achieve (some may not even desire to achieve this state), but this will remain the ideal that is referenced when discussing the value of DevOps.

Great, So What’s Next?

I am only scratching the surface here. In any effort to improve skills, whether in golf or in software delivery, it is important to gauge where you are so you can identify what is next. Just like I shouldn’t focus on trying to get par on every (or any) hole, you shouldn’t try to deliver software faster than is reasonable without building up to it. Knowing where you are starting from is important.

There is a lot under the surface of the 3RA Framework that I will share with you in some upcoming posts. For now take the time to internalize the four stages—Reactionary, Repeatable, Reliable, and Aspirational. Understanding the differences I have described is important in learning how to assess where you are and what you should focus on next.

Scaling Agile Across the Enterprise

Earlier this week we, the Developer Division at Microsoft, released a series of short videos telling the story of how we made the transformation from our old waterfallian ways to a scaled agile way of working.

Here is where you can find all of the videos.

This is the story of a division of 3,000+ engineers who had been following a well-defined waterfall approach for years, even as we advocated Agile and all of its virtues. In truth, we weren’t the hypocrites that statement makes us out to be. In fact, we had small agile teams popping up all throughout the division for years. In true agile fashion, the momentum grew as more and more small teams saw the value in Agile. We had many small agile teams working within a waterfall framework. As the momentum grew we decided to make a change.

I love the way Soma (S. Somasegar, our Corporate VP) puts it when he talks about how he made a decision about Agile. He says the decision he made was not to implement Agile; it was to not stop teams from trying Agile.


As our business needs changed and our release velocity needed to increase, Agile gained more momentum, until a few years ago, when the momentum had grown enough that we decided to formalize Agile across the division. That meant getting the entire division onto the same sprint schedule (we have three-week sprints, and all teams start and stop sprints on the same day). Over time we moved into a new building that was designed with Agile teams in mind (team rooms, focus rooms, etc.).


Today we are on sprint #66 and we have had countless releases since we began this transformation, including releases of Visual Studio Online every three weeks, internal releases of the Visual Studio clients, and several public releases of Visual Studio (2012, 2013) and related updates.


I encourage you to check out the video series (rather than have me retell it here). There is some great insight shared by my peers and me, and lots of footage of how we work, where we work, and what we do.

Here are links to the individual videos in the series.


Everything we do around development — collaboration, testing, and customer feedback — has changed.


Waterfall vs. Agile

Today, we think about how fast we can translate an idea into reality, and get it into customers’ hands.


Visual Studio Transition

We needed to work more incrementally, deliver to customers faster, and let feedback make the product better.


The Agile Shift

We didn’t decide we were going to be agile starting tomorrow. There was gradual buy-in with teams and leadership.


Physical Transformation

We moved out of our individual offices, and put teams together into the same room.


The New Normal

We need to make sure we’re hearing our customers, and learning from them as we’re building the software.


Employee Response

We’ll talk about what’s working and what’s not working… the idea is continuous improvement.


Measuring Success

We need to know that what we’re building scales for software teams around the world.



Your software development efforts have to aid your business. That’s why they are there.


Knightmare: A DevOps Cautionary Tale

I was speaking at a conference last year on the topics of DevOps, Configuration as Code, and Continuous Delivery and used the following story to demonstrate the importance of making deployments fully automated and repeatable as part of a DevOps/Continuous Delivery initiative. Since that conference I have been asked by several people to share the story through my blog. This story is true – this really happened. This is my telling of the story based on what I have read (I was not involved in this).

This is the story of how a company with nearly $400 million in assets went bankrupt in 45 minutes because of a failed deployment.


Knight Capital Group is an American global financial services firm engaging in market making, electronic execution, and institutional sales and trading. In 2012 Knight was the largest trader in US equities, with market share of around 17% on each of the NYSE and NASDAQ. Knight’s Electronic Trading Group (ETG) managed an average daily trading volume of more than 3.3 billion trades, worth over 21 billion dollars…daily. That’s no joke!

On July 31, 2012 Knight had approximately $365 million in cash and equivalents.

The NYSE was planning to launch a new Retail Liquidity Program (a program meant to provide improved pricing to retail investors through retail brokers, like Knight) on August 1, 2012. In preparation for this event Knight updated their automated, high-speed, algorithmic router that sends orders into the market for execution, known as SMARS. One of the core functions of SMARS is to receive orders from other components of Knight’s trading platform (“parent” orders) and then send one or more “child” orders out for execution. In other words, SMARS would receive large orders from the trading platform and break them up into multiple smaller orders in order to find a buyer/seller match for the volume of shares. The larger the parent order, the more child orders would be generated.
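The parent/child relationship can be sketched in a few lines. This is a hypothetical illustration, not Knight’s actual code; the 100-share child size is an assumption chosen just to make the example concrete.

```python
# Illustrative sketch (not Knight's actual code): how a router like SMARS
# conceptually breaks a large "parent" order into smaller "child" orders.

def split_parent_order(symbol: str, total_shares: int, max_child: int = 100):
    """Return a list of (symbol, size) child orders covering the parent."""
    children = []
    remaining = total_shares
    while remaining > 0:
        size = min(max_child, remaining)   # never route more than max_child
        children.append((symbol, size))
        remaining -= size                  # track cumulative fills
    return children

# A 250-share parent order becomes three child orders.
print(split_parent_order("KCG", 250))  # [('KCG', 100), ('KCG', 100), ('KCG', 50)]
```

Note the `remaining -= size` line: the whole safety of the loop hinges on tracking how much of the parent has been covered, which is exactly what goes missing in the story that follows.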

The update to SMARS was intended to replace old, unused code referred to as “Power Peg” – functionality that Knight hadn’t used in 8 years (why code that had been dead for 8 years was still present in the code base is a mystery, but that’s not the point). The code that was updated repurposed an old flag that was used to activate the Power Peg functionality. The code was thoroughly tested and proven to work correctly and reliably. What could possibly go wrong?

What Could Possibly Go Wrong? Indeed!

Between July 27, 2012 and July 31, 2012 Knight manually deployed the new software to a limited number of servers per day – eight (8) servers in all. This is what the SEC filing says about the manual deployment process (BTW – if there is an SEC filing about your deployment something may have gone terribly wrong).

“During the deployment of the new code, however, one of Knight’s technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review.”
SEC Filing | Release No. 70694 | October 16, 2013

At 9:30 AM Eastern Time on August 1, 2012 the markets opened and Knight began processing orders from broker-dealers on behalf of their customers for the new Retail Liquidity Program. The seven (7) servers that had the correct SMARS deployment began processing these orders correctly. Orders sent to the eighth server triggered the supposedly repurposed flag and brought back from the dead the old Power Peg code.

Attack of the Killer Code Zombies

It’s important to understand what the “dead” Power Peg code was meant to do. This functionality was meant to count the shares bought/sold against a parent order as child orders were executed. Power Peg would instruct the system to stop routing child orders once the parent order was fulfilled. Basically, Power Peg would keep track of the child orders and stop them once the parent order was completed. In 2005 Knight moved this cumulative tracking functionality to an earlier stage in the code execution (thus removing the count tracking from the Power Peg functionality).

When the Power Peg flag on the eighth server was activated the Power Peg functionality began routing child orders for execution, but wasn’t tracking the amount of shares against the parent order – somewhat like an endless loop.
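The failure mode can be sketched as follows. This is my hypothetical illustration of the concept, not Knight’s code; the 100-share child size and the iteration cap (standing in for “someone finally pulls the plug”) are assumptions for the example.

```python
# Illustrative sketch of the failure mode (hypothetical, not Knight's code):
# a router that no longer tracks cumulative fills never knows when to stop.

def route_children(parent_shares: int, track_fills: bool,
                   max_iterations: int = 10_000) -> int:
    """Route 100-share child orders against a parent order.

    With track_fills=True the loop stops when the parent is fulfilled.
    With track_fills=False (the Power Peg path after the 2005 change),
    only the iteration cap stops it.
    """
    filled = 0
    child_orders = 0
    for _ in range(max_iterations):
        if track_fills and filled >= parent_shares:
            break                 # correct behavior: parent fulfilled, stop
        child_orders += 1         # otherwise: keep routing, forever
        filled += 100
    return child_orders

print(route_children(500, track_fills=True))    # 5 child orders, then stop
print(route_children(500, track_fills=False))   # 10000 -- runs until stopped
```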

45 Minutes of Hell

Imagine what would happen if you had a system capable of sending automated, high-speed orders into the market without any tracking to see if enough orders had been executed. Yes, it was that bad.

When the market opened at 9:30 AM people quickly knew something was wrong. By 9:31 AM it was evident to many people on Wall Street that something serious was happening. The market was being flooded with orders out of the ordinary for regular trading volumes on certain stocks. By 9:32 AM many people on Wall Street were wondering why it hadn’t stopped. This was an eternity in high-speed trading terms. Why hadn’t someone hit the kill switch on whatever system was doing this? As it turns out, there was no kill switch. During the first 45 minutes of trading Knight’s executions constituted more than 50% of the trading volume, driving certain stocks up over 10% in value. As a result, other stocks decreased in value in response to the erroneous trades.
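A kill switch doesn’t need to be sophisticated to be valuable. The sketch below is my own illustration of the pattern (not anything Knight had): a shared flag that every order-routing loop checks before sending, which an operator or automated guard can trip at any time.

```python
# A minimal "kill switch" sketch (my illustration, not Knight's system):
# a shared flag every order-routing loop checks before sending an order.

import threading

class KillSwitch:
    """Thread-safe one-way switch: once tripped, trading stays halted."""
    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        self._tripped.set()

    def ok_to_trade(self) -> bool:
        return not self._tripped.is_set()

switch = KillSwitch()
sent = 0
for order in range(1_000):
    if not switch.ok_to_trade():
        break                    # routing halts immediately once tripped
    sent += 1
    if sent == 10:               # an operator (or automated guard) trips it
        switch.trip()
print(sent)  # 10 -- the remaining 990 orders are never sent
```

In a real system the guard condition would be something like “order volume exceeds N times the expected rate”, but the essential part is that the check exists and is documented before the bad day, not invented during it.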

To make things worse, Knight’s system began sending automated email messages earlier in the day – as early as 8:01 AM (when SMARS had processed orders eligible for pre-market trading). The email messages referenced SMARS and identified an error as “Power Peg disabled.” Between 8:01 AM and 9:30 AM there were 97 of these emails sent to Knight personnel. Of course these emails were not designed as system alerts and therefore no one looked at them right away. Oops.

During the 45 minutes of Hell that Knight experienced they attempted several countermeasures to try and stop the erroneous trades. There was no kill switch (and no documented procedures for how to react) so they were left trying to diagnose the issue in a live trading environment where 8 million shares were being traded every minute. Since they were unable to determine what was causing the erroneous orders they reacted by uninstalling the new code from the servers it was deployed to correctly. In other words, they removed the working code and left the broken code. This only amplified the issue, causing additional parent orders to activate the Power Peg code on all servers, not just the one that wasn’t deployed to correctly. Eventually they were able to stop the system – after 45 minutes of trading.

In the first 45 minutes the market was open, the Power Peg code received and processed 212 parent orders. As a result SMARS sent millions of child orders into the market, resulting in 4 million transactions against 154 stocks for more than 397 million shares. For you stock market junkies, this meant Knight assumed approximately $3.5 billion net long positions in 80 stocks and $3.15 billion net short positions in 74 stocks. In layman’s terms, Knight Capital Group realized a $460 million loss in 45 minutes. Remember, Knight only had $365 million in cash and equivalents. In 45 minutes Knight went from being the largest trader in US equities and a major market maker on the NYSE and NASDAQ to bankrupt. They had 48 hours to raise the capital necessary to cover their losses (which they managed to do with a $400 million investment from around a half-dozen investors). Knight Capital Group was eventually acquired by Getco LLC (December 2012) and the merged company is now called KCG Holdings.

A Lesson to Learn

The events of August 1, 2012 should be a lesson to all development and operations teams. It is not enough to build great software and test it; you also have to ensure it is delivered to market correctly so that your customers get the value you are delivering (and so you don’t bankrupt your company). The engineer(s) who deployed SMARS are not solely to blame here – the process Knight had set up was not appropriate for the risk they were exposed to. Additionally their process (or lack thereof) was inherently prone to error. Any time your deployment process relies on humans reading and following instructions you are exposing yourself to risk. Humans make mistakes. The mistakes could be in the instructions, in the interpretation of the instructions, or in the execution of the instructions.

Deployments need to be automated and repeatable and as free from potential human error as possible. Had Knight implemented an automated deployment system – complete with configuration, deployment and test automation – the error that caused the Knightmare would have been avoided.
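One of the simplest automated safeguards would have caught this: verify after every deployment that every server in the fleet is running the same build. The sketch below is my illustration of that idea (the server names and artifact contents are made up for the example).

```python
# A minimal post-deployment verification sketch (my illustration): compare
# an artifact checksum across the fleet so a missed server stands out.

import hashlib

def checksum(artifact_bytes: bytes) -> str:
    """Fingerprint a deployed artifact."""
    return hashlib.sha256(artifact_bytes).hexdigest()

def verify_fleet(deployed: dict, expected: bytes) -> list:
    """Return the names of any servers whose artifact doesn't match."""
    want = checksum(expected)
    return [name for name, blob in deployed.items() if checksum(blob) != want]

new_code = b"RLP build 2012-07-31"
fleet = {f"smars-{i}": new_code for i in range(1, 8)}
fleet["smars-8"] = b"Power Peg build 2003"   # the server the technician missed

print(verify_fleet(fleet, new_code))  # ['smars-8']
```

A check like this is a few lines in any deployment pipeline, runs in seconds, and turns “no one realized the eighth server was wrong” into a red build before the market opens.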

A couple of the principles for Continuous Delivery apply here (even if you are not implementing a full Continuous Delivery process):

  • Releasing software should be a repeatable, reliable process.
  • Automate as much as is reasonable.