Too Much Data – You Will Need Help with the Internet of Things

According to the GSMA, a worldwide association of mobile operators and related companies, there are 9 billion connected devices in the world today. By 2020 there will be 24 billion, and over half of them will be non-mobile devices such as household appliances. The GSMA estimates that connected devices will be a US$1.2 trillion market by 2020. So marketers and publishers had better get ready for this new world too.

Get Ready For a World of Connected Devices

How much information do you need from your refrigerator? Honestly, you only care if it's not working, and early warning that it is about to stop working would be ideal. But how much data will your refrigerator be sending you? Most connected devices send status updates every few minutes, or even every few seconds, 24/7.

What you will need is a program that collects that information and notifies you only when something is going wrong. That program could run on the fridge itself, but what happens if the fridge loses power? How will you get notified then?

In order for the Internet of Things not to become the 21st-century version of the clock blinking on the front of the VCR, you will need an app to monitor your personal Internet of Things. It should live in the cloud, and it should have simple rules for which events you should be bothered with. If This Then That is a good start, but it will need to be less like programming before regular folks can use it. I want to tell an app “Let me know when my fridge is acting up” without having to explain to the app what “acting up” means for a refrigerator.
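To make that concrete, here is a toy sketch of what such a cloud-side rules layer might look like. Everything in it (the device fields, thresholds, and notify method) is invented for illustration; this is not any real IoT API.

```ruby
# A toy cloud-side monitor: devices post status updates constantly, and a
# handful of plain-language rules decide which events deserve a human's
# attention. All names and thresholds below are hypothetical.
Rule = Struct.new(:description, :test)

RULES = [
  Rule.new("Let me know when my fridge is acting up",
           ->(s) { s[:device] == "fridge" && (s[:temp_f] > 40 || !s[:compressor_ok]) }),
  Rule.new("Let me know if a device goes silent for an hour",
           ->(s) { Time.now - s[:last_seen] > 3600 })
]

def notify(reason, status)
  # A real service would send a push notification, SMS, or email here.
  puts "ALERT (#{reason}): #{status.inspect}"
end

def handle_update(status)
  RULES.each { |rule| notify(rule.description, status) if rule.test.call(status) }
end

# A status update arrives every few seconds; almost all are silently ignored.
handle_update(device: "fridge", temp_f: 44, compressor_ok: false, last_seen: Time.now)
```

The point of the sketch is the division of labor: the user states intent in plain language, and translating that into thresholds is the service's problem, not the user's.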


How Unix got its name

Lots of chewy goodness in this IEEE article on the history of Unix. One of my favorite items is how Unix got its name. Ken Thompson and Dennis Ritchie had been on the Bell Labs team that was working, along with teams from GE and MIT, to develop an on-line time-sharing operating system for GE's mainframes called Multics, short for “Multiplexed Information and Computing Service”. Bell Labs pulled out of the project, leaving Thompson, Ritchie, and the rest of the team suffering from on-line withdrawal as they had to shift back to old-style batch processing.

Wanting to recreate the much better programming experience of time sharing, Thompson began tinkering:

Thompson had passed some of his time after the demise of Multics writing a computer game called Space Travel, which simulated all the major bodies in the solar system along with a spaceship that could fly around them. Written for the GE-645, Space Travel was clunky to play—and expensive: roughly US $75 a game for the CPU time. Hunting around, Thompson came across a dusty PDP-7, a minicomputer built by Digital Equipment Corp. that some of his Bell Labs colleagues had purchased earlier for a circuit-analysis project. Thompson rewrote Space Travel to run on it.

And with that little programming exercise, a second door cracked ajar. It was to swing wide open during the summer of 1969 when Thompson’s wife, Bonnie, spent a month visiting his parents to show off their newborn son. Thompson took advantage of his temporary bachelor existence to write a good chunk of what would become the Unix operating system for the discarded PDP‑7. The name Unix stems from a joke one of Thompson’s colleagues made: Because the new operating system supported only one user (Thompson), he saw it as an emasculated version of Multics and dubbed it “Un-multiplexed Information and Computing Service,” or Unics. The name later morphed into Unix.


Disruption and Generational Cohorts

Two unrelated posts have me thinking about how generational cohorts impact the Innovator's Dilemma. The first is Jay Greene's post on how Microsoft killed the Courier. Unable to decide whether Microsoft should disrupt itself with a tablet running a new, non-standard version of Windows, or wait until the Windows group could finish making the main Windows OS tablet-ready, Ballmer called in Bill Gates to help him choose. (The fact that Ballmer could not make this choice himself and had to run to Bill for a decision is all the evidence you need that Ballmer has no business being the CEO of a company like Microsoft.) Gates seemed concerned about what he felt was a critical flaw in the Courier team's approach to the product:

Courier users wouldn’t want or need a feature-rich e-mail application such as Microsoft’s Outlook that lets them switch to conversation views in their inbox or support offline e-mail reading and writing. The key to Courier, Allard’s team argued, was its focus on content creation. Courier was for the creative set, a gadget on which architects might begin to sketch building plans, or writers might begin to draft documents.

“This is where Bill had an allergic reaction,” said one Courier worker who talked with an attendee of the meeting. As is his style in product reviews, Gates pressed Allard, challenging the logic of the approach.

It’s not hard to understand Gates’ response. Microsoft makes billions of dollars every year on its Exchange e-mail server software and its Outlook e-mail application. While heated debates are common in Microsoft’s development process, Gates’ concerns didn’t bode well for Courier. He conveyed his opinions to Ballmer, who was gathering data from others at the company as well.

I actually don't think Gates' reaction was because he was concerned about Exchange email server revenue. Although I don't know Gates personally, I am pretty sure he understands the Innovator's Dilemma as well as any executive, and as such understands that for Microsoft to maintain relevancy, it will need to release new products that undermine its existing ones. I think it is instructive that Gates was hung up on email. I think he saw the lack of it as crippling the device's acceptance in companies (and this was to be more a worker's tablet, not just a consumer's tablet like the iPad). And that brings me to the second post, on generational cohorts.

The second post is about cohort replacement and institutional change. It argues that change occurs not because people change their views or work habits, but because different generations have different views and work habits, and as one generation passes and is replaced by a newer one, the way things are done changes. And this, I think, is the key to Gates' failure on the Courier.

For Gates' generation, a heavy email client was the critical nexus on the work computer. It tied everything together. Email was used for quick messaging, for calendaring meetings, for organizing project workflow in the production of project artifacts, as a repository for those artifacts, and finally as the means and record of communication outside the organization or company. For Gates, a device without a heavy email client is a device you cannot work on. And if the Courier team could not understand that, then they did not understand the product they were trying to build.

The problem is that for the younger generation of workers, heavy email clients are not needed. Part of this is due to a fundamental change in the way technical teams work. In Gates' day, and still to this day on the Microsoft Redmond campus, programmers worked in offices alone or with one other person. In-process work was done in long email threads, meetings were scheduled and held for critical decisions, and large project spec documents were approved and signed off on. In today's lean environment (and the Courier team worked in a lean environment in Pioneer Square, not on the Redmond campus), teams work in large open rooms, workflow is managed face to face, documentation is minimized, and meetings are impromptu.

For the newer generation of workers, social media and texting have replaced the quick-messaging function of email. Project workflow is organized on wikis or CRMs, or even on walls with sticky notes. Project artifacts are minimized and stored on wikis. Impromptu meetings require much less scheduling. The one aspect of the old email system still in use is external communication, and if that is all you are using email for, you don't need a heavy client; a web client will work fine.

Gates got it wrong: the lack of a heavy email client would not have crippled the Courier as a work machine. It might have limited its acceptance to the younger workforce and more agile workplaces, but that is what innovative products do. They change the way the new people work, and as the new people become the majority, they change the way business is done.


Best Tutorial Ever

I recently finished the Ruby on Rails Tutorial: Learn Rails by Example by Michael Hartl. It is the best tutorial I have ever read. By the end of the basic ‘Hello World’ part of the tutorial, I had installed Rails, Ruby, and Git; checked my code into my local Git repository; pushed that repository into the cloud at GitHub; and deployed ‘Hello World’ into the cloud at Heroku. Next I set up a TDD environment and began running automated unit tests. Then I built a basic version of Twitter.
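To give a taste of the TDD part, here is the kind of RSpec model spec the tutorial has you write. This particular User validation is a stand-in I made up, not code from the book.

```ruby
# An illustrative Rails 3-era RSpec model spec (the tutorial's actual specs
# differ; the User model and validation here are hypothetical).
require 'spec_helper'

describe User do
  it "is invalid without an email address" do
    user = User.new(name: "Example User", email: "")
    user.should_not be_valid
  end
end
```

With Autotest running (Chapter 3 of the tutorial), a spec like this re-runs automatically every time you save a file.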

Well written, an interesting project, and lots of help on-line. Just add ‘rails tutorial’ to your search terms; there is a high probability that whatever problem you have has already been posted and fixed.

If you want to learn Rails, start here. If you want to learn Ruby, start here. If you want to get Git installed and working with GitHub, start with Chapter 1. If you want to get Autotest running, start with Chapter 3.

So much goodness.


Venture Capital insanity explained, one year later

Dan Shapiro’s post on VC economic insanity is a year old this month. If you have not read it, please do so. If you have read it, time for a refresher.

The source of VC insanity is their own fund structure. Dan uses a pretty standard VC structure to explain this:

1) A 10-year fund targeting 9% a year.
2) The VCs get 2% a year in management fees.
3) 20% of the profit on any deal goes straight to the VCs, not the fund.
4) No recycling of profits (this was new to me).

Dan does the math to explain what this means:

A little math: to get 9% per year, a hypothetical $100mm investment must increase to 100*e^(10*9%)=$246mm. But $20mm of the principal (2%/year) goes to management fees and can’t be invested. And the VCs get 20% of the profits (the carry). So actually, $80mm invested needs to yield $290mm, a 3.6x return.

… When you hear that VCs aim for a 10x return, it’s not greed – it’s because if a third of their companies fail and a third just barely get them their money back, a 10x return on the winners puts them in the same place as the S&P 500!
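Dan's arithmetic is easy to check. Here is a quick sanity check in Ruby, using what appears to be his assumption that the 20% carry is charged on profit above the $80mm actually invested:

```ruby
# Reproducing Dan's fund math (all figures in $mm).
fund     = 100.0
years    = 10
hurdle   = fund * Math.exp(0.09 * years)  # ~246: what LPs expect back at 9%/yr
invested = fund - (0.02 * years * fund)   # 80: principal left after 2%/yr fees

# LPs net the gross proceeds minus the VCs' 20% carry on the profit,
# so solve 0.8 * (gross - invested) + invested = hurdle for gross:
gross    = (hurdle - invested) / 0.8 + invested  # ~287.5, call it $290mm
multiple = gross / invested                      # ~3.6x

puts "the portfolio must return ~$#{gross.round}mm, a #{multiple.round(1)}x multiple"
```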

To me, the no-recycling rule leads to much of the crazy and even evil behavior you see in some funds. No recycling means that once a fund sells a company in its portfolio, the profit from that sale sits in the fund; it cannot be reinvested in other opportunities. This means that the VCs' time horizon for all investments is 10 years, regardless of when they actually get paid. Combine that with the fact that most VCs reserve funds for future rounds in the same company, and you get the insidious result that VCs may block an early sale of a company even if the price offered would return 10 times their current investment. That is crazy, but the evil part comes in because blocking the sale means the company will have to raise additional rounds, perhaps under duress, especially if the VCs have pushed the company to spend money faster than it should.

Dan lays out the scenario:

Consider this: HyP invests $2mm in YouCo at a premoney valuation of $2mm, meaning they own 50% of a company worth $4mm. Someone offers $40mm for the company. Hallelujah! A 10x win! You each get $20mm!

Not so fast. If they invested $2mm and reserved $5mm for a follow on investment, it’s probably too late to invest the other $5mm. They’re actually getting just shy of a 3x investment on their allocated capital – not even the 3.5x they need to approach the S&P 500. No deal. If they let you sell the company and pocket that cool $20mm, they would actually be coming out behind. The simple economic calculation is to block the sale, and force the company to take additional investment. Consider: if the second round is under duress, best case it’s a flat round: that means $5mm on $4mm premoney, and voila! They own 78% of the company.
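The 78% figure checks out. Here is the dilution math from Dan's scenario spelled out (my own sketch, same numbers):

```ruby
# Working through the duress-round dilution in Dan's scenario (all $mm).
premoney  = 4.0   # a flat round: same valuation as the first investment
new_money = 5.0   # the reserved follow-on, invested under duress
postmoney = premoney + new_money

hyp_stake  = 0.50                   # HyP owned 50% going in
hyp_stake *= premoney / postmoney   # old stake diluted to 2/9
hyp_stake += new_money / postmoney  # plus the new shares, 5/9

puts "HyP now owns #{(hyp_stake * 100).round}% of the company"  # => 78%
```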


Running Ubuntu 11.04 on a Windows 7 Machine with VMware Player

All of our work at Astonish Inc uses open source software running on some flavor of Linux. My dev laptop runs Ubuntu 11.04, and I could not be happier with the experience. However, on the business side we still need to run some Windows apps, and some things just work better on Windows (like connecting to our ioSafe). The cranky old XP laptop we had been using for all our Windows work was becoming too unreliable, and we needed to replace it. But I did not want to buy a Windows-only machine; it would not be used enough to justify the expense. I looked at setting up a dual-boot system, but in the end we decided on the easier path of running Ubuntu in VMware Player.

We bought a desktop from Costco running Windows 7. We chose a desktop because it is cheaper than a laptop; this is a work machine and we wanted to maximize the benefit-to-cost ratio. We got it fully loaded (i7-2600 processor, 16 GB RAM, 2 TB of disk) because we wanted plenty of power for the VMware image, plus we don't buy desktops that often and we will be using it for a long time. We followed the instructions at HowToGeek, and the installation was easy. The hardest part was getting VMware Player from VMware; their site does not seem to like Chrome.

So far the installation has worked great, the only issue being getting sound to work in Ubuntu. Performance is great. And unlike a dual-boot setup, it is very easy to move between Windows and Ubuntu, although in our shop the user stays in Ubuntu almost all the time. Thumbs up all around.


Mark Suster – Data is the Next Major Layer of the Cloud

Mark Suster talks about the layers of services provided by different companies in the cloud, starting with hardware, then processing, then management. According to Mark, the next big layer is data. Of course, he is investing in a company that will provide a cloud-based data service, so he may be biased :) This is not data as in bits to be stored, but a data-service layer that will do for cloud apps what Oracle and MS SQL Server currently do for hosted apps. In other words, it fills the same niche in the cloud that proprietary databases do in traditional configurations.

I do like his narrative on how the cloud has affected startup costs. According to Mark, basic infrastructure (servers, routers, licenses for proprietary software, bandwidth, rack space) costs around 10% of a startup's budget. In 1999, when a startup had to pay for all of that, the costs typically ran around $500,000, meaning a startup needed to raise $5 million to, well, start up. The rule of thumb is that investors look for a 10x return, so they will only put $5 million into a company if they believe they can get $50 million out of it.

By 2005, open source software had evolved far enough that you could dramatically reduce costs by eliminating most proprietary software licenses, and cloud-based data storage had matured enough that you could start to reduce some of your hardware needs. The result was that the same company that required $500,000 in infrastructure in 1999 could now start with around $50,000, which means a total startup budget of $500,000 and an exit target of a $5 million company. Company sizes follow a power-law distribution, meaning there are something like five times as many businesses at half the size of your business. Whatever the exact number is, the important point is that halving the size of the business more than doubles the number of businesses that exist. These are all very rough rules of thumb, also known as SWAGs (scientific wild-ass guesses), but according to census data, for every $50 million company there are something like 90 $5 million companies.

Think about that. Your return is the same, 10x your initial investment, but the chance of achieving that return is 90 times higher.

Today, with storage, processing, and management in the cloud, combined with mature open source software, that same company can be started with $5,000 in infrastructure costs, for a total startup budget of $50,000 and an exit target of $500,000. It turns out the power-law distribution of company sizes means (SWAG alert) that there are around twice as many $500,000 businesses as $5 million companies, and 180 times as many $500,000 businesses as $50 million companies.

Same return (10x), but 180 times more likely to succeed than in 1999.
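The chain of reasoning is mechanical enough to write down. A quick sketch using the post's SWAG numbers (nothing here is real data):

```ruby
# The back-of-the-envelope chain for each era (all figures are the post's SWAGs).
{ "1999" => 500_000, "2005" => 50_000, "today" => 5_000 }.each do |era, infra|
  budget      = infra * 10   # infrastructure is roughly 10% of the total raise
  exit_target = budget * 10  # investors want a 10x return on what they put in
  puts "#{era}: $#{infra} infra -> $#{budget} raise -> $#{exit_target} exit target"
end
# e.g. today: $5000 infra -> $50000 raise -> $500000 exit target
```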

There are some scale issues. $4.5 million in startup costs ($5 million minus infrastructure) translates into something like 15 jobs at $100,000 (salary plus benefits) for 3 years. $450,000 means 1.5 such jobs for 3 years, and $50,000 means no jobs for any number of years. Clearly, with lower initial capital, you need to make money and get to paying salaries much faster. That is a real challenge for new startups in the cloud. No longer is it go big or go home with your one shot. Now it is get the site out, get to sustainable income as fast as possible, and then get on to the next one, as fast as possible.
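The jobs arithmetic, for the record (again using the post's round numbers):

```ruby
# How many three-year jobs at $100k/yr (salary + benefits) does each budget buy,
# assuming ~90% of the raise is left after infrastructure? (Post's SWAGs.)
[5_000_000, 500_000, 50_000].each do |budget|
  payroll = budget * 0.9             # what's left after infrastructure
  jobs    = payroll / (100_000 * 3)  # one job = $100k/yr for 3 years
  puts "$#{budget} raise -> #{jobs.round(2)} jobs"  # 15.0, 1.5, effectively 0
end
```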

The first scenario sounds like a lotto ticket to me. The second one, more like… capitalism.
