Last week (Jan 17-18) I had an opportunity to spend two days listening to the likes of Microsoft, Google, Facebook, Twitter, NVIDIA, and others discuss advancements in the realm of AI. Topics ranged from easy-to-understand overviews to quite deep discussions of the math behind image recognition systems. To say that some of the current and coming technology was mind-blowing doesn't do it justice. I'm going to do my best here to relay a few of the topics I found interesting, along with my general thoughts on this conference and ones like it.

The first speaker who really captivated my attention was Steven Guggenheimer of Microsoft (@stevenguggs). Steven touched on many points around massive scale, mixed reality, and, of course, the cloud. With so many of these tools available at our fingertips, we can assume we'll start seeing them in more and more verticals that touch our everyday lives. To that end, it was important to lay some ethical groundwork. Steven shared six rules of AI ethics during his talk:

  1. AI must maximize efficiencies without destroying the dignity of people
  2. AI must guard against bias. (I feel a future post on bias is required)
  3. AI needs accountability so humans can undo unintended harm
  4. AI must be transparent
  5. AI must be designed for intelligent privacy
  6. AI must be designed to assist humanity

The key to these points, and to the tools mentioned earlier, is that so many of the building blocks required to get started are already available to those willing to dive in. This was echoed in a later session by a data scientist from Microsoft. Both speakers dug deep into the Microsoft AI platform, which provides the infrastructure, services, and tools needed to get a quick start for someone new to the game, or to let someone with significant experience dig deep into the available services and build their own tools. I was fascinated by their approach, as it didn't rely solely on Azure-based solutions as you might expect. For those developing their own on-premises solutions, there was still quite a menu of choices, allowing for a code-first mentality. If you're just digging into this area, however, there is a consumption model with many cognitive services available, including bots, agents (Cortana, for example), and object-identification tools. The technologies they've been able to demonstrate and build with these tool sets are astounding. If you get a moment, check out Holoportation and Emma.
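To give a feel for what that consumption model looks like, here's a minimal sketch of how a client might construct a request to a cognitive-services-style REST endpoint for object identification. The endpoint URL and payload shape below are illustrative placeholders, not the actual Microsoft API; the subscription-key header is the general pattern Azure-style services use.

```python
# Hedged sketch: building a request for a cognitive-services-style
# object-identification endpoint. The URL and payload shape are
# hypothetical placeholders for illustration only.
import json
import urllib.request

def build_vision_request(image_url: str, api_key: str,
                         endpoint: str = "https://example.invalid/vision/analyze"):
    """Construct (but do not send) a JSON POST request for image analysis."""
    payload = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Azure-style services authenticate with a subscription-key header.
            "Ocp-Apim-Subscription-Key": api_key,
        },
        method="POST",
    )
```

The appeal of this model is exactly what was pitched: you bring an image URL and a key, and the heavy lifting happens in someone else's trained model.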

Google's approach was similar to Microsoft's, but differentiated by its ease of consumption. Clearly Google has been at this for a while, with TensorFlow having been available for some time now. The difference, at least to me, was the focus on making very consumable services available so easily. Take a minute and walk through these services. To quote a famous virtual geek, this tech is face melting. For more, take a look at their AutoML program.

Clearly both Microsoft and Google are shooting for the moon with these tools, making all of them easy for the masses to consume yet applicable to problems that improve quality of life for us all.

Another interesting topic, and one very applicable to all of us technologists, was a presentation by Nick Acosta (@pubchimps) of IBM. Nick addressed an interesting problem, or at least an interesting use case: detecting the programming language of code without a deep inspection. As a working example he used his own GitHub repo, where this code is available for your own demonstration purposes, by the way. Nick was simply counting certain kinds of words, at certain frequencies, in particular data sets; doing so allowed him to make an assertion about the programming language. You can check out his demo code here.
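The counting idea above can be sketched in a few lines. This is not Nick's code; it's a toy illustration of the same intuition, with tiny hand-picked keyword sets standing in for the much larger, data-derived vocabularies a real detector would use.

```python
# Toy sketch of language detection by keyword frequency.
# The keyword sets are illustrative assumptions, not a real training set.
import re
from collections import Counter

KEYWORDS = {
    "python": {"def", "import", "self", "elif", "lambda", "None"},
    "javascript": {"function", "var", "const", "let", "=>", "console"},
    "java": {"public", "static", "void", "class", "extends", "new"},
}

def guess_language(code: str) -> str:
    """Return the language whose keywords occur most often in `code`."""
    # Tokenize into identifier-like words (plus the JS arrow token).
    tokens = Counter(re.findall(r"[A-Za-z_]+|=>", code))
    scores = {
        lang: sum(tokens[kw] for kw in kws)
        for lang, kws in KEYWORDS.items()
    }
    return max(scores, key=scores.get)
```

For example, `guess_language("const f = (x) => console.log(x);")` scores highest for JavaScript. A real system would learn which tokens discriminate between languages rather than hard-coding them, but the mechanic is the same: count, score, pick the winner.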

This was a great conference, and I highly encourage you to attend their next one in San Francisco, April 10-13. I've been into the ML topic for a while, but the sessions I was able to attend here really opened my eyes to both the capabilities and the availability of tools to advance the use of AI in our everyday lives.