
November 30, 2009

The Tablet Business Is Clearly Not Easy Business


We are nearing the end of 2009, and the world of tablet PCs is just as fuzzy as it was at the beginning of the year. Despite continued rumormongering and finger wagging over what Cupertino has planned, Apple's long-anticipated tablet remains unseen, and more discussion has been devoted to its reported slips than to its potential specifications. Meanwhile, as you no doubt saw, Michael Arrington of TechCrunch made the abrupt announcement this morning that the CrunchPad project had been put to rest before it was able to debut - the victim of a relationship gone awry between client and vendor. While the two examples are significantly different, they both demonstrate how hard it has been for companies big or small to make headway in this space.

Despite seeing only the earliest of prototypes, and never seeing the product run myself, I was bullish on the CrunchPad's potential for a few reasons. The first was its promised low cost, initially pegged at $299, and later said to be closer to $400. The second was the recognition that computing is moving away from the folders-and-desktop metaphor and toward a Web-centric cloud (as discussed with Apple vs. Google), with applications running on Web services instead of on local CPUs. Third, I wanted the opportunity not to avoid Apple, but to support a new challenger - as I watched a respected peer and occasional acquaintance try to take the leap from creating content to creating hardware.

As I mentioned in a pair of posts in July, I was leaning toward choosing the CrunchPad over Apple, and guessed Mike could become as well known for the success of the device as he has been for the success of his blog network, were it to take off. Now, barring a complete reversal, it looks like not only can't I get my hands on a CrunchPad, but we won't get to see Mike and his team fight the operational and sales challenges common to any hardware firm.

These challenges look to be exacerbated when you add the word "tablet" - whether due to the engineering demands of cramming so much utility into a slim device, reducing costs, or finding the right market. Even if I find the concept of a tablet intriguing, I am still wondering when I would put a laptop down and pick up a tablet, or when I would holster the iPhone and pick the tablet up. It occupies an uncomfortable middle ground which hasn't yet been solved, en masse, by manufacturers keen on penetrating the space. Even Google's Chrome OS, which looks like a clear candidate for such a project, appears focused on netbooks, which are not quite tablets and not quite laptops. Once you divide up the potential market, it's one full of slivers, without much pie.

Unlike others, I am not calling the CrunchPad vaporware. I see no good in this product never reaching the market. Even if it had shipped and not found dramatic traction, it would have had a chance, and given customers choice.

Disclose This: I Can't Disclose Everything Everywhere!

Though hubbub around the FTC's plans to require bloggers to disclose relationships with companies, services and products has lessened over the last few weeks, the December 1st enactment date is rapidly approaching. As I have promised before, and many other times, I will make any relationships I have that are beyond "typical" as clear to you as I am able. But, as you can expect, as communication vehicles evolve, even today's best-attempt laws aren't ready for how I operate. With so much of my downstream activity being automated, triggered by activity elsewhere around the Web, the opportunity to disclose sometimes doesn't even come up.

Long-time site visitors know that for the first three years of my writing on the blog, I was extremely careful not to talk about my company, its employees, partners, customers or even the industry. In 2008 and this year, I took on some unpaid advisory roles that included a minor equity share, and told you about that, disclosing where it made sense. And now, with my new role at Paladin Advisors Group (@paladinag), I am growing the client roster, attracting enterprise companies and startups who want to work with me more closely.

So that's a good thing, right? I agree. I am excited about the new possibilities as I sign up new clients and advisory roles, because my entanglement in the Web is changing - from an interested observer and consumer to one that can play more of an activist role, shaping the tools we are using now or will use soon. Yet, as I add such entanglements, I am thinking about where I can possibly disclose - not so I am covered by the FTC, but so I am covered with you, because your trust is more important than whatever the FTC dreams up.

(And to be honest, I've given it a lot of thought, and it's probably true that in many cases, advisors to a company, or its VCs, should be trusted a great deal more than a typical consumer, as they actually know the product and company better than just about anyone.)

Here's where I perceive gaps from the FTC to my workflow:

1) Archived Posts and Discussions of Current Clients

Very often, my new clients are ones I have written about before. My previous writing about them was no doubt due to my interest, and they came to know me either prior to the story, or because of it. Now that I have a working relationship with a client - for example, my6sense - should I retroactively go tag all previous posts that mentioned them with new disclosure text?

Also - could it be perceived that I covered them in a positive light before because I was angling for an advisory role? Even if there was no professional role at the time, should one assume that was the case, or is it coincidence?

2) Liking of 3rd Party Content On Current Clients

I won't waste my time reading every word of the FTC's script, but other folks, including SiliconAngle's Mark Hopkins, have done the dirty work. The obvious places to disclose are, of course, in blog posts and in tweets. But what about other people's content, where I can add visibility to their comments, even if I was not the original author?

For example, should I not "Like" comments by the client or other shares of the client's work that others have made on Facebook and FriendFeed? Is it assumed that I would only "like" a post about a feature release because of the relationship? What about adding their items to Delicious or other networks?

3) Automation Can Prevent Disclosure

If I see a blog post by a client and share it in Google Reader, do I also have to add a note disclosing that they are a client? What if I retweet their official Twitter account? Twitter's new retweet functionality won't let you add any comment at all, and even with a manual retweet, one almost runs out of characters.

Remember, too, that I have a fairly robust social media workflow: much of the way I move data around the Web is by letting other networks do the heavy lifting. Bookmarks I make on Delicious and items I share on Google Reader automatically get tweeted - and I never get a chance to disclose any relationship. This could be construed as a crime of omission - not an egregious one, but one that could still be looked at sideways, if one feels I did not make best efforts to make a relationship clear.

In October, I said the disclosure rules would have little effect, largely because I believe people who skirt the rules today will continue to do so, and while there will be some showcase examples of enforcement, they will be a small percentage of infractions indeed. I still don't think this will be dramatic. But it always seems like the people who try to change the rules are changing them for the way the Web was, not the way the Web is. And if I can easily find loopholes or places to work around the rules, the people who are the real bad guys will walk all over this thing.

November 28, 2009

Keep A Close Eye on Chris Messina for the Web's Future

There are a few people on the Web whose work I can't help but watch with significant interest, as I know they are among the more visible people, working in teams with lesser-known colleagues, focusing their effort on moving the Web forward. From DeWitt Clinton, Brett Slatkin, Brad Fitzpatrick, Chris Saad, Dave Winer and David Recordon, to people like Jason Shellen, Chris Wetherell, Kevin Marks, Leah Culver, Paul Buchheit, Bret Taylor and Chris Messina, to name two handfuls, I believe these folks are working on the projects that are shaping the way we communicate and take in information. Messina, in particular, has penned a few blog posts over the last month that have had me thinking quite a bit - and it is safe to say he is on a roll.

Chris, in November alone, has proposed a new microsyntax for Twitter, forecast the death of the URL, and talked about how "designing for the gut" takes advantage of how the new social Web pushes people to overcome phobias and connect with people.

A well-known advocate for open source, and one of the voices behind OAuth, which we discussed on Thursday, Messina has a history of thinking beyond where we are today and proposing concrete ideas that can be acted upon immediately. Those hashtags you see everywhere on Twitter these days, in practically every tech event and many trending topics? Chris proposed the idea in August of 2007. So it makes sense that he has given a ton of thought to more uses for microsyntax, as he describes in his proposals for Twitter, suggesting new items including "/by", "/via" and "/cc". These suggestions are clear and concise, the work of someone who has done his homework.
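Part of the appeal of slash-tag microsyntax is how trivially machines can pick it apart. As a rough illustration (the function name, token set and parsing rules here are my own sketch, not Chris' specification), a parser might peel trailing slash-tags off the end of a tweet like so:

```python
import re

# Slash-tags from Chris' proposals, e.g. "/by", "/via", "/cc".
SLASH_TAGS = ("by", "via", "cc")

def parse_microsyntax(tweet):
    """Split a tweet into its message body and any trailing slash-tag
    commands, returning (body, {tag: argument})."""
    tags = {}
    # A slash-tag is "/word" followed by its argument, at the end of the text.
    pattern = re.compile(r"\s+/(%s)\s+(@?\w+)\s*$" % "|".join(SLASH_TAGS))
    while True:
        match = pattern.search(tweet)
        if not match:
            break
        tags[match.group(1)] = match.group(2)
        tweet = tweet[:match.start()]
    return tweet, tags
```

Feeding it a tweet like "Great read on hashtags /via @chrismessina /cc @louisgray" would hand back the plain message plus a small dictionary of commands - exactly the kind of structure clients and search engines could act on.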

Chris' thoughts on the URL's reduced prominence are also salient, as we become more used to navigating in our browsers with pre-determined buttons and workflows. For me, URLs are often simply one-time visits before they are thrown into my RSS reader for safe-keeping, or they become bookmarks for later clicking. The act of typing in a URL character by character seems antiquated. As he says, that's a gut feeling, and not one backed by much science on my part.

As I see it, the social Web changes the entire process of content discovery. Instead of portals, we are relying on mortals. Our trusted friends and experts bring the best content from around the Web to us directly, via Facebook, Twitter, FriendFeed, and even that old tool... e-mail. We are trusting human filters to select the best from our RSS repositories and hand it off downstream. We pick a handful of trusted favorites, and make them the equivalent of our Yahoo!, or even our Google.

Chris' post on the death of URLs paints a not so pretty picture of how, if left uncontrolled, a few powerful companies could help funnel the majority of users to predetermined sites, hand selected by them - much like the fears we once had about dominant portals. This could be done as we graduate from the traditional browser and link model to something else, where Web-connected applications pass us the requested data. He says, "We all know that the internet has won as the transport medium for all data — but the universal interface for interacting with the web? — well, that battle is just now getting underway."

One thing about the Web is that it is ever evolving. The places we choose to communicate are changing. The information we think provides value is changing. Our requirements for how quickly we need the data are changing. Chris and the many folks I mentioned in the first paragraph are among the first line of defense we have, trying to set standards and promote change for a world that feels right from both the gut and from the mind. You can find Chris' writing over at http://factoryjoe.com/blog/ or on Twitter at @chrismessina. While I assume many of you read him religiously, it's time the rest of you did as well.

November 26, 2009

Is There a Looming Battle Over OAuth's Successor?

The OAuth protocol, used on many popular Web sites and applications, including Twitter, to pass your credentials between sites without requiring the entry of your user name and password, is potentially under pressure from a team of techies representing Microsoft, Google and Yahoo!, who have introduced a competing specification, interpreted as being aimed to succeed OAuth, called the Web Resource Authorization Protocol, or WRAP. Eran Hammer-Lahav, the Director of Standards Development at Yahoo!, who helped coordinate many OAuth contributions and created a formal specification for the initial OAuth standard, recently panned the move, saying, "The road to hell is paved with good intentions," and adding his own proposal for OAuth 2.0, which he hopes will better separate authentication from authorization.

Today's OAuth standard is known to have its imperfections. Hammer-Lahav notes in his 2.0 proposal that OAuth is essentially "unusable" for mobile devices or installed apps, and also suggests that OAuth "does not adequately support large providers". But he says the move to create WRAP has confused developers' focus, and diverted resources, calling it "just one illustration of the demise of the OAuth community".

But his opinion, unsurprisingly, is not universally accepted. David Recordon of Facebook, also on the boards of the OpenID and Open Web Foundations, states in the comments of the post that Facebook is not supporting OAuth 1.0, as it is simply too heavy - requiring a massive increase in HTTP requests - also adding that other developers find OAuth "too difficult to correctly implement".
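To see why developers call OAuth 1.0 "heavy", it helps to compare what each request requires. Under OAuth 1.0, every single request must carry a signature computed over a normalized base string of the method, URL and sorted parameters; under a WRAP-style model, the client simply attaches a previously issued token. A rough sketch of the contrast (simplified from the OAuth Core 1.0 spec; the URLs and secrets are placeholders):

```python
import hashlib
import hmac
import urllib.parse
from base64 import b64encode

def oauth1_signature(method, url, params, consumer_secret, token_secret):
    """Compute an OAuth 1.0 HMAC-SHA1 signature: every request needs this
    normalization-and-signing step (simplified from the Core 1.0 spec)."""
    # 1. Sort and percent-encode every parameter into a canonical string.
    encoded = urllib.parse.urlencode(sorted(params.items()),
                                     quote_via=urllib.parse.quote)
    # 2. Build the signature base string from method, URL and parameters.
    base = "&".join(urllib.parse.quote(part, safe="")
                    for part in (method.upper(), url, encoded))
    # 3. The key is the two secrets joined by '&'; sign with HMAC-SHA1.
    key = "%s&%s" % (urllib.parse.quote(consumer_secret, safe=""),
                     urllib.parse.quote(token_secret, safe=""))
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return b64encode(digest).decode()

def wrap_style_header(access_token):
    """A WRAP-style request, by contrast, just presents a bearer token."""
    return {"Authorization": 'WRAP access_token="%s"' % access_token}
```

Get one percent-encoding or sort order wrong in the first function and the server rejects the request, which is exactly the "too difficult to correctly implement" complaint; the second function has essentially nothing to get wrong.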

David followed up his initial comments with a post to the IETF mailing list, which you can see here: Facebook, OAuth, and WRAP. In the note, he highlights the belief that the proposed WRAP alternative maps well to the company's current authentication process, adding that WRAP simplifies the development community's learning curve.

The discussion, which is ongoing, may end up splintering development communities between sticking with the current version of OAuth 1.0, looking at WRAP as an alternative, or trying to support a new OAuth 2.0, as specified by Hammer-Lahav. But if you have wondered why Facebook Connect acts one way and Twitter OAuth acts another way, it's because they are different approaches entirely. If this discussion is any indication, one can expect there to be continued divergence, rather than a single way to deliver user authentication and authority between sites and applications in the future on the Web.

For another viewpoint on this broad topic, see Jesse Stay's post: The Future Has No Log In Button. Also, DeWitt Clinton of Google, on FriendFeed, says the open discussion "is good".

November 25, 2009

BackType Feeds Partners Faster, Thanks to PubSubHubbub

BackType, the most robust and feature-rich comments tracker on the Web, has expanded its services over the last few months with a number of new items: the launch of BackTweets, to find shared links on the microblogging service; a TweetCount plugin to highlight the number of times items have been retweeted; and a BackType Connect plugin for WordPress to show related conversations from around the Web attached to your blog. The company's API is also used by popular services including Bit.ly and Disqus, which leverage the service to find external links and related tweets. Today, these downstream partners can expect updates to happen in near real-time, thanks to BackType's jumping on the PubSubHubbub bandwagon in a partnership with Superfeedr, providing practically instantaneous updates downstream.

Superfeedr, one of the more aggressive services promoting PubSubHubbub, is providing the legwork for BackType, acting as a hub and pushing updates automatically as they are discovered. In today's post, BackType says the move reduces load on publishers, since the hub sends feed updates to all subscribers at once, without multiple polls, and avoids the arbitrary timing of clients' periodic requests.
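The mechanics behind this are pleasantly simple: a subscriber tells the hub which topic feed it wants, and the hub calls back to verify intent before pushing updates. A minimal sketch of the subscriber's side of the PubSubHubbub 0.3 handshake (the URLs here are placeholders, and real code would serve this over HTTP):

```python
import urllib.parse

def subscription_request(topic_url, callback_url):
    """Build the form body a subscriber POSTs to the hub to subscribe
    to a topic feed, per the PubSubHubbub 0.3 spec."""
    return urllib.parse.urlencode({
        "hub.mode": "subscribe",
        "hub.topic": topic_url,
        "hub.callback": callback_url,
        "hub.verify": "sync",
    })

def handle_verification(params, expected_topic):
    """The hub then GETs the callback to verify intent; echo hub.challenge
    with a 200 to confirm, or refuse with a 404 if we never asked for
    this topic. Returns (status_code, response_body)."""
    if (params.get("hub.mode") == "subscribe"
            and params.get("hub.topic") == expected_topic):
        return 200, params.get("hub.challenge", "")
    return 404, ""
```

Once the handshake completes, the hub simply POSTs new entries to the callback as they appear - which is the whole reason downstream partners like Disqus see BackType's updates in near real-time instead of waiting on a polling interval.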

If you subscribe to BackType to follow individuals' comments, or if you simply use Disqus or BackType connect on your blog, the move will ensure updates reach your content faster than they ever have before. In a Web world that is getting increasingly real-time, the name PubSubHubbub keeps popping up, and BackType is staying on the leading edge of innovation.

November 23, 2009

By Thinking Small, Data Robotics' Success Looking Big

2008 and 2009 haven't been particularly kind to many companies. Amidst a cacophony of bailouts, bankruptcies, lowered valuations and layoffs, Data Robotics, a direct attached storage manufacturer based in Santa Clara, has delivered growth exceeding 100 percent in each of the last two years - and hopes to be on track for an IPO some time in the next two years. Combined with a popular product line, which was enhanced with a pair of new models today, you can see the company has bucked the trend, surprising many people, including me, with its success.

The company was initially known as Trusted Data, before changing its name in 2007 to better reflect its automation capabilities without confusing customers into thinking it was a security company. I have known the company and its founder, CEO Geoff Barrall, for several years, having once worked with him as a colleague, and even working for him directly from 2004-05. (Consider that my disclosure.)

When he started Data Robotics in 2005, I wasn't all that keen on yet another small storage array entering the market, even if it was aimed at consumers, and came with nifty features, like a meter that showed disk utilization on its facade.

My skepticism, and that of others, didn't deter Barrall, as he and the company found niches for its Drobo desktop storage arrays, including the creative professional community and, most recently, the federal market, which I was told has become the company's primary vertical. The company has made a name for itself over the last few years with a distinctive product appearance, a proprietary non-RAID architecture that aims to protect data in the event of disk failures, and the potential ability to upgrade forever, simply by swapping out disks for those of a larger size - thanks to evolutions in disk density that have seen capacities grow from hundreds of gigabytes not too long ago to multiple terabytes today.

In the most recent year, Data Robotics accrued approximately $30 million in revenue, double that of the previous year, and quadruple the year prior. With the current quarter looking good, Barrall told a group of storage geeks at Gestalt IT's Tech Field Day a few weeks ago that doubling revenue again was not out of the question. Having seen profitable months already, the company intends to blast through break-even, and test the public markets when both it and Drobo are ready.


Drobo Teased Us With A Preview Earlier This Month

Today, the company added to its product line with the new Drobo S and a new iSCSI SAN, the Drobo Elite. While some vendors, such as EMC or NetApp, started at the top of the market and are working their way down, Data Robotics started with the consumer and is working its way up into bigger devices - up to a significant 32 terabytes in its latest gear.

In contrast to my "set it and hope to forget it" Apple Time Capsule, which stays in a single configuration forever, until I get rid of it, the Drobo can be upgraded over time, and doesn't blink at seeing disks of different sizes in the same array. Though it requires a second device, called DroboShare, to provide Network Attached Storage (NAS) functionality, it is quite compelling, especially as I increase my creation and archival of rich media, as most fathers of twins no doubt do.

The new entrants to the product family aren't necessarily for the low-end consumers like me, who might do just fine with a 4-bay desktop storage device, and don't need iSCSI functionality, but they show that the company is filling any gaps in the market that may prevent it from continuing its doubling of growth. At a time when many companies are shuffling the deck and trying to mute bad news, Data Robotics has been quietly growing.


my6sense Update Adds Time Filters, Social Enhancements

At the end of last week, my6sense, an iPhone application focused on digital intuition, helping you reduce information overload by focusing on the content most relevant to you, introduced version 1.1 of its service to the iTunes App Store. While the company described the update as "full of goodies", the top change impacting my usage is the filtering of relevant items by time, ensuring my focus is on items relevant both in content and in recency.


my6sense Offers Time Filters on their Relevancy Stream

my6sense's approach thus far has been to surface the most relevant items from your RSS feeds into the application, even if the article was published weeks or months ago. With the latest push, you can limit the "Relevancy" tab to the last 12, 24 or 48 hours, while the company's algorithms still apply.
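In effect, as I understand it, this amounts to applying a recency window on top of a relevance-ranked stream. This is my own illustrative sketch, not my6sense's actual algorithm, with item fields and scoring invented for the example:

```python
import time

def relevancy_stream(items, window_hours=None, now=None):
    """Return items sorted by relevance score, optionally keeping only
    those published within the last `window_hours` (e.g. 12, 24 or 48).
    Each item is a dict with 'score' and 'published' (Unix time) keys."""
    now = time.time() if now is None else now
    if window_hours is not None:
        cutoff = now - window_hours * 3600
        items = [it for it in items if it["published"] >= cutoff]
    return sorted(items, key=lambda it: it["score"], reverse=True)
```

With no window, a highly relevant item from weeks ago still tops the list, which was the old behavior; with a 24-hour window, only fresh items compete on relevance, which is the new one.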


The Updated Stream, Which I am Mastering, Highlights New Items

As you can imagine, for somebody like me, who reads practically every article through Google Reader when at the desktop, poring through thousands of items on a small screen like the iPhone's can be a challenging task. my6sense's move to reduce clutter in my stream is a good thing, and making sure the items are new is just as important a step. The update is a major reason I recently progressed through the company's levels of digital intuition, reaching "Master" on Sunday. I've been told there are two or three more levels to go.

Disclosure: my6sense is a client of Paladin Advisors Group, where I am Managing Editor of New Media. My comments on the company's product are always independent, and do not pass their way in advance.

November 22, 2009

Finding Value Even If I Were the Last FriendFeeder...

Since the site's acquisition by Facebook this summer, I have not talked much about my thoughts on the future of FriendFeed, aside from the initial response saying it was not "dead". There hasn't been a major compelling event to do so, but it keeps coming up, so I thought I would share my thoughts, in light of what we've seen since August.

On Friday, I mentioned there are three cores to a successful social service - namely technology, relevancy and community. How one perceives community can be very different depending on one's perspective, and a network's community is a fluid one, shifting with little things like the time of day, the day of the week, or the stage of evolution - particularly noticeable in services that are first overrun with geeky early adopters, only to see the mainstream eventually find footing. In light of my heavy use of FriendFeed over the last two-plus years, and the last few months of insecurity about the site's future, which have seen significantly reduced traffic and use, I have thought a lot about how much time I should invest in a site that, in theory, is seeing parts of its community shrink.

After much thought, I can see myself deriving real value from the site even if every single other person I communicate with there regularly were to disappear. While it's incomparable fun to trade discussions and debates with the tight community there, to rack up comments and likes, or to contribute my own, scattershot, through my feeds, there are many reasons I have made FriendFeed my social media nervous system that have nothing to do with the "community" aspect - and try as I might to reset my browser home page to another address, I keep going back to the old standby, because FriendFeed works so well.

1. It's Still The Best Aggregator In Town

While there have been many attempts at aggregation services over the last few years, FriendFeed built the most robust and easiest-to-consume aggregation service out there. FriendFeed can provide a single page to view all of my activity, just as it originally set out to do in late 2007.

2. It Still Has All My Friends' Content In One Place

Even if people stop using a site, their content continues to flow through FriendFeed - the small exception being the handful of users who, for whatever reason, deleted their accounts outright. This means that, in addition to Google Reader, Twitter Lists or other services I am using, I can click out and find interesting news.

3. It Still Acts As A Fantastic Distribution Engine

FriendFeed lets you send specific services' updates to Twitter automatically, based on your preferences. This means that, if I choose to, bookmarks I make on Delicious automatically can flow, through FriendFeed, to Twitter. So too can my updates on SmugMug, while I try not to drown my Twitter followers with Google Reader shares. Having this take place automatically is still much easier than using the "Send to" feature in Delicious for every single item.
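Conceptually, this is just per-service routing rules. A sketch of how I have mine set up (this is my own simplified model of FriendFeed's per-service Twitter settings, not its actual implementation):

```python
# Which source services FriendFeed should cross-post to Twitter for me.
TWITTER_ROUTES = {
    "delicious": True,       # bookmarks flow through automatically
    "smugmug": True,         # photo updates flow too
    "googlereader": False,   # shares stay off Twitter, to spare my followers
}

def should_cross_post(service):
    """Decide whether an item from `service` gets pushed on to Twitter.
    Services without an explicit rule default to staying put."""
    return TWITTER_ROUTES.get(service, False)
```

Set the rules once, and every future bookmark or photo flows downstream unattended - which is exactly why the manual "Send to" step in Delicious feels like such a chore by comparison.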

4. It Still Has The Deepest Social Media Search Online

Twitter's search utility still only goes back a few days. New partnerships with Google and Bing, as well as many different third party search engines are trying to make things better, but they don't compare with FriendFeed, which lets you search all FriendFeed users' updates, going back to the beginning of the site. Using the site's advanced search functionality, you can search specific services, specific people, or even find specific posts that had comments from a single individual.

5. It Had Saved Searches and Lists Long Before it Was Cool

Before Twitter introduced lists, FriendFeed had already enabled me to set up lists of the folks I follow, so I could reduce my entire social stream to specialized groups. It also provided the option to save searches, including advanced searches, in my sidebar. So rather than reinvent the wheel somewhere else and redo the effort, my customized experience is already set up to help me find data fast.



So What Is the "Future" of FriendFeed?

At Friday's Real-Time CrunchUp, hosted by TechCrunch, Paul Buchheit, co-founder of FriendFeed, now working at Facebook, said the site was not destined to go away any time soon - even if working on it hasn't been front and center for him or for his colleagues, who are said to be working on infrastructure projects within Facebook, hopefully making that social network even more special.

That the FriendFeed blog has not been updated since August, added to the news that two members of the team's small developer base have already left for non-Facebook pastures (Ben Darnell and Gary Burd are the known exits), hasn't helped the community feel reassured that all is well for the future of the site. In interviews, it sounds like work on the site has fallen into elective "20% time", familiar to Google watchers, and many regulars who participated on the site in the last year have chosen to leave.

No matter how you try to massage popular sites like Twitter and Facebook, they do not equal the product FriendFeed produced, from a technology standpoint. Now that the community is changing, and some are resigned to a different world, there are pressures to consider alternatives, but until I can find tools that solve each of my issues, as outlined above, there are tons of reasons I will continue to use the product - even if it's assumed I'm looking denial straight in the face.

November 20, 2009

Embrace Our Twitter Ad Overlords, Assuming Relevancy

Those of you who have some history with the blog know that I am not a huge fan of advertising. I skip commercials on my TiVo. I don't click on banner ads online. I switch stations when listening to the radio, assuming I am not listening to my ad-free iTunes library or ad-free Sirius XM radio app. I once said, to some controversy, that most bloggers don't deserve any ad revenue at all, and also took considerable effort to report many Facebook advertisements as offensive. But despite all this, with official word coming from Twitter COO Dick Costolo that the service will indeed include advertising in the very near future, I am fine - pending any future annoyances. Why? Because I am not anti-advertising. I am pro-relevancy.

In my rant against bloggers who don't add clear value trying to get a piece of revenue, I aggressively said "services offer real value, bloggers don't", adding, "Web services are adding real value to the Web by changing the way we interact and communicate. Bloggers, myself included, are not. We are more like consumers than producers in this case, and the last time I checked, consumers pay, they don't get paid, no matter how excited we might be about a product."

After much debate, Twitter, a service which provides value to millions, is looking to bring ads to the table in what they promise will be a unique way. With the growing talent base at the company, there's no doubt they see what has happened to traditional advertising models, and they don't likely want to see a race to the bottom in terms of quality. In order not to damage the trust they have accumulated with users, they will need to provide a new and differentiated approach to this model that derives real value - for the company, for the advertiser and for the viewer. I don't want to see yet another copy of AdSense. I want to see something very new.

Overwhelmingly, most of us in the Tech Web want Twitter to succeed. Despite the many concerns we have had about the service and its occasional hiccups, we recognize its growing role in the world of communication, and see it as a growing player in infrastructure, taking share from e-mail, and my personal favorite, RSS. That the company would have to grow from a revenue-free model to one that has a revenue stream was clear, barring an early buyout from a stable tech leader.

Much of the problem with today's ads, which have seen rates fall across the board, has been tied to a lack of relevancy. I have asked that ad companies leverage my social profile and give me accurate ads downstream, by utilizing my content-rich Facebook profile or some other site. Twitter has a unique opportunity: it knows not just what my social profile looks like, but what I talk about and what I share; it will know, through geolocation, potentially where I am, and how I am characterized, thanks to lists.

I do not hate advertising. I hate bad, wasteful, untargeted advertising. If advertising is accurately targeted and provides value, it is much like finding a new blog post on a topic I like, or finding a product I really do want to buy. I have seen page after page, service after service, take the easy way out and slap up advertising just because, but if somebody can get the formula right, it can only be good for the Web in general. Good services deserve revenue, and good customers deserve good, relevant ads. I will hold my breath and hope that Twitter gets this formula right.

The Chrome OS Release Is Not About Now, It's About Next.

Yesterday, as most tech outlets noted, Google previewed its much-awaited Chrome Operating System - and in parallel released the operating system's code to the open source community. By the end of the day, sites like Gdgt had compiled virtual machine-ready installs of the early alpha system, and geeks, including me, were tinkering with it. Unsurprisingly, there were near-immediate reviews, with some calling the news a disappointment. But for me, the news was not so much about Chrome OS being ready to go, but about Google delivering on a promise, and showing its cards before it had to, to let us know what's progressing in Mountain View.

Google's success and growth over the last decade has not been without its detractors. The company, which could once simply be described as a search engine, now has its reach in a dramatic number of Web applications and services. I tend to be rosy on the company's expansion, and even asked last month if it was at this point possible for somebody to use Google software exclusively and not lose functionality.

Google's preview of the Chrome OS was more than a product release. It was a milestone in a vision of a Web-centric world, one in which we are increasingly living. For the vast majority of my own activity, I am online, not using software. I intentionally use some applications, like Microsoft's Office suite or Adobe Photoshop, quickly, and then close them just as quickly, so as not to slow down my computer's performance. Google's Chrome OS is the latest development in a vision that says our activity will be online, our data will be stored in the cloud, and applications that have traditionally been desktop software will make their way online.

In no uncertain terms, I agree with their vision. This is happening, and it is happening fast.

When I booted up VMware Fusion last night and turned on the Google Chrome OS for the first time, it didn't come with an instruction manual, asking me only for my login and password - which corresponded with my GMail account. Logging in took me to the now-familiar Chrome browser, the starting point for the next generation of computing. While the experience today is not dramatic, as we are already familiar with the browser on Macs and PCs, it was a checkpoint that this was real and happening. There was no way to move the browser off screen and get to the equivalent of a desktop, for it didn't exist. There was no C: drive or System folder. Just the browser and an infinite Web that is capable of taking me anywhere.

So with due respect to my good friend Jason Kaneshiro, who writes: Google Chrome OS: I Don’t Get It and ReadWriteWeb's Sarah Perez, who asks Was Chrome OS a Disappointment?, the main concerns I have seen voiced around limitations on what the OS can or cannot do are much like the concerns people had when the first-generation iMac shipped without a floppy disk drive and ditched Apple's proprietary cables for the new Universal Serial Bus (USB) standard.

Google promised us a new operating system built on the Chrome Web browser. They delivered. They gave us more information yesterday showing that they were working on it. They immediately gave back to the open source community and gave us a way to start tinkering. This is not a situation of ditching the Mac or a Windows 7 machine today, but instead, about pushing us forward to a new reality. If we choose to stay in one place clinging to our old ideas, we will only get further behind.

Technology, Community, Relevancy: The 3 Social Pillars




Why do some social sites thrive while others fail? Why do some networks have you dedicating hours every day to participate, while others couldn't get you to raise an eyebrow? And why don't your friends see eye to eye with you on what the best services are, even after you've told them about your favorites time and again? The more I am exposed to new sites and social services, the clearer it becomes that there are three core elements that need to be solved to deliver a killer social service - and falling short in just one can mean rapid closure. Meanwhile, solving all three of these core elements for one person doesn't mean they are solved for everyone.

These three core elements? Technology, community and relevancy.
(Though not always in that order)

Technology

Social service users want a flexible array of features that let them accomplish the task at hand quickly, without the user interface getting in the way. Members of social sites want the reassurance that they are working with a leading network that provides high-quality tools, keeps pace with industry developments, and does not grow stale with age.

If sites do not utilize current technology, not even the most ardent fans can be expected to stay loyal, especially as they are reminded of alternative functionality through their ventures on the Web. In this case, solving for a strong community, even with good relevancy, is not enough.

Community

Community can be measured in terms of both quantity and perceived quality. Only the rarest of early adopters wants to participate in a social network that doesn't have any members. Without debating what came first, the chicken or the egg, successful social networks require an active community that will deliver a regular stream of updates - keeping the service fresh and vibrant. On other occasions, visitors to a social site will find the existing community does not meet their needs, as they may have little in common.

Even the most targeted sites with top technology can fail without an evangelizing community to keep them alive. And one man's perfect community is another man's "mob", so just because it works for you doesn't guarantee runaway success.

Relevancy

While most of the talk around social services focuses either on technology or how to grow communities and customers, simple relevancy cannot be overlooked. The most "sticky" communities are those that center around a specific topic or group, no matter how esoteric. From the mommyblogger movement to sports or automobile discussions, being on topic is a must for growing a network.

Without the site's content or community being relevant to potential new users, they are unlikely to engage, barring the often misguided belief that an individual could "drag" along a critical mass of friends or followers to have serious impact on the topics being discussed.


When sites hit two of the three pillars, it is little better than focusing on only one. There are precious few social services that can gain significant traction with the masses without needing to target specific communities or derive a specific niche relevancy. And we have seen way too many sites with an interesting group of engaged people, only for the technology to look nearly abandoned, taking the form of a 90s-era social bulletin board or forum.

While Facebook and Twitter command most of the attention right now, there are many other social networks seeing strong engagement, tucked away due to their niche focuses. From the team of blogs at SportsBlogsNation and their resulting communities to small business sites like Ecademy, communities are building with relevancy, and some strong technology - helping them to be survivors in a world littered with failures.

I am looking at a lot of social networks these days. I am seeing frantic e-mails from slowing and dying communities asking what is next. There are some new ones I am quite fond of - but they are usually ones that solve for two of these three issues, requiring some serious help to take them to the next level. If you are building a social site, or even if you are just a frequent user, think about these three pillars: Technology, Community, Relevancy. Is the site meeting those needs, or is it falling short?

November 18, 2009

Open Web Foundation Speeds Protocols' Legal Contracts



On Tuesday, the Open Web Foundation released an agreement aimed at speeding the adoption of new specifications by downstream users, with the intent of spreading open tools throughout the Web. Though it occupies the always-complicated intersection of the legal world and the tech world, the agreement is very interesting. The non-profit organization, featuring leading geeks from many of Silicon Valley's best known and most-respected companies, is hoping to promote data portability and open Web standards, no matter their source. Tuesday's agreement makes it easier for others to implement specifications without lengthy bureaucratic legalities, and 10 major protocols and services have already signed up.

The services that have committed to using the new agreement include Yahoo!'s Media RSS standard, OAuth, Microsoft's WebSlice, and my often-mentioned personal favorites, the PubSubHubbub and Salmon protocols, being promoted by employees from Google.

As explained on the Yahoo! blog, on Facebook's Developers' blog and at Standards Law, services such as OpenID and OpenSocial were both forced to spend a great deal of effort working on legalities, taking their sharp engineering resources away from doing what they do best - writing code. The hope is that by setting a standard for approvals and access, many of these headaches can be eliminated.

The agreement itself is lightweight compared to many legal tomes, and essentially mirrors standards set by Apache and Creative Commons, both of which have long histories in the Web community. It covers how attribution is handled, assures users that they can leverage the work without fear of patent lawsuits, and ensures that downstream users will not lay claim to others' efforts.

It could be yet another important step in making sure the Web is open, and that users can expect similar behavior and access capabilities from site to site and service to service. See also:
The Blurry Picture of Open APIs, Standards, Data Ownership
from October 29th.

November 17, 2009

How Facebook's News Feed Failed Me (And My Family)

As more and more people turn to social networks to share their information, practically all of us are connecting to an ever-increasing number of people, and for the most part, we are updating more frequently and sharing content from different sources in multiple places. The resulting increase in velocity, often termed noise, has led practically all tools to try to help us find the "most relevant" data, or the "best" information, based either on activity from others in our social graph, or on our own past activity. Sometimes this works very well, helping to make signal out of the noise. On other occasions, it can dramatically miss the stated goal and actually make things worse. This week, Facebook's latest enhancements appear to have had a serious negative impact on me (and my family).



As you likely already know, Facebook has been working on a slew of changes to its "news feed", the main column on the site that alerts you to friends' activity. The social network implemented "real time" updates to show you when new entries were posted, and very recently divided the feed into two parts - a "Live Feed" for all updates as they occur, with the newest on top, and a "News Feed", ostensibly drawn from those I engage with most often, or for "hot" content - presumably measured through interaction. This is an approach similar to FriendFeed's "best of day", PostRank's work on RSS feeds, and Google Reader's new "Magic" feature.

This weekend was a busy one for me, one where I was less connected to the computer than usual. As a result, I checked in to Facebook only a handful of times. Glancing at the News Feed on Saturday, nothing particularly stood out. The same held true on Sunday. I was greeted with updates from friends like Jason Goldberg and Chris Saad, both solid tech entrepreneurs. I also saw notes from Robert Scoble and a handful of connections that originated on FriendFeed. Still, nothing amazing to report.



But after 11 p.m. Sunday night, I saw a friend from high school make a mundane update, saying he had a good weekend, one he would cap off with a round of "Anno 1404." Turns out that's a city-building game, like SimCity. No big deal. I clicked through to his wall to see if he hinted at the good weekend, and at the top of it I saw something truly interesting. A simple update on his wall said, "Don likes Malinda Gray's photo." Malinda is my 23-year-old sister. Why would he be looking at her photos? And what photo?

I clicked through, and to my surprise found out that my other sister, 28, had given birth to a new baby boy, her first, making me an uncle. Wow! After more investigation, I found that my sister, as well as my mom, and also the mother of the child, had made posts on Facebook throughout the day Sunday on the progress of the labor, and how things had gone. I also found out that my sister had actually gone into labor and started that process around noon on Saturday - the previous day, and that I had absolutely no clue.

How could I have missed it, considering they had been updating Facebook regularly, amassing a good share of comments and likes with each update? Well, apparently, Facebook didn't figure out that this update stream was relevant to me. It didn't realize - and announce, with alarm bells - that Louis's sister was having a baby. It didn't realize that photos from my sisters, both of them, of a new baby and the hospital just prior, were more important than a random "OH" via Twitter from Chris.

Facebook's filter failed me. While, yes, I could have clicked on each of my individual family members' profiles at any point over the prior 24 hours, or yes, I should maybe make a Family-only list and make sure to visit it regularly, I've so far trusted the network to do a good job at gauging relevancy. Yes, it's true that I interact more often with Jason Goldberg or Johnny Worthington on Facebook than I do with my own family, but in this case, the News Feed hid the only truly relevant thing that was going on this weekend, and we missed it.
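To illustrate how a filter like this can go wrong, here is a toy sketch of interaction-weighted ranking. The authors, weights and logic are entirely hypothetical - this is not Facebook's actual algorithm - but it shows how scoring a post by my past engagement with its author, rather than by the post's own importance, buries the one update that mattered.

```python
# A toy sketch of interaction-weighted feed ranking, illustrating the failure
# mode described above. Names, counts and logic are hypothetical.

def rank_feed(updates, interaction_history):
    """Order updates by the viewer's past interaction with each author,
    ignoring how important any individual post actually is."""
    return sorted(
        updates,
        key=lambda u: interaction_history.get(u["author"], 0),
        reverse=True,
    )

# I comment on tech friends' posts daily; on family posts, almost never.
history = {"jason": 40, "chris": 35, "sister": 2}

updates = [
    {"author": "chris", "text": "OH at the coffee shop..."},
    {"author": "sister", "text": "Baby boy arrived! Photos inside."},
    {"author": "jason", "text": "Shipping a new feature today."},
]

for u in rank_feed(updates, history):
    print(u["author"], "-", u["text"])
# The baby announcement sorts dead last - exactly the failure described above.
```

A filter that also weighed the engagement a post itself was receiving (all those comments and likes from the rest of the family) would have surfaced the announcement; one keyed purely to my own interaction habits could not.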

I explain further in the below video:

November 16, 2009

Inefficiency of Interaction Driving Need for Social Leverage

"It is a complete joke how we interact with people on the computer right now," believes Brad Feld of the Foundry Group. With multiple devices and scads of Web services needed to consume information and engage with others on the Web effectively, Feld and other venture capitalists are looking for ways to fund the next generation of companies and products designed to leverage social connections, reducing information overload and enabling simpler collaboration in the enterprise. In a keynote panel at the Defrag conference last week, five venture capitalists explained how they thought they could help companies take advantage of social experiences that are being forged, which, if successful, could supplant the way we discover information today.

Today's Web is one that is largely search driven, leading to Google's position as both king and king maker. But Union Square Ventures principal Fred Wilson said that much of the information we discover and the links we click on are coming through social experiences, instead of from search or navigation. Taking advantage of this activity would be a logical extension for companies, and therefore for the VCs looking to enable it to happen.

Roger Ehrenberg of IA Capital Partners, whose fund is behind BlogTalkRadio, Mashery, TweetDeck, Bit.ly and many other services, said that despite advances, the Web continues to have a problem of finding relevant information, and identifying people who would be potential connections. Social leverage should enable you to tap into the knowledge of your peers to bring the information to you, at your own pace.

"It's an opt-in world, and you can let people in as they deliver information," Ehrenberg said. "When you look at all this information you are receiving, you need to build in filtering."

As I have often stated, I believe "there is no information overload" - or at least, there can't be without your explicit permission. Feld argued that the perception of information overload is due more to individuals' approach to how they consume data than to an increase in total data.

"We are stuck with a rigid set of distribution models," Feld said. "You can be rigid or disciplined about one type of information, or you can let whatever comes at you come at you. The computer has to get a lot smarter about what to do with all that stuff, and it needs more adaptability. We are in an era over the next decade where there will be a fundamental shift."

Over the last 20 years, Fred Wilson has said he has already seen two major shifts in communication. In the 1980s, as he started in the venture capital business, events would lead to business cards, which led to hours on the phone chasing deals. The 90s brought e-mail and the ability to hit an estimated 10 times as many people. Blogging then let him reach more than 10,000 people a day, which he called "a different interaction model."

"I still see the deals I want to see and I can see better deals because of it," he said. "E-mail is a heavy interaction model, which is a lot of work, but a blog post is easy for me... I don't even use the phone any more."

Feld and Wilson have found ways to adapt to the changing dynamic to continue their efforts in business, but with so many others feeling an information onslaught, lacking proper filters to reduce the noise, this too sets up the potential for new solutions and new services.

"The biggest opportunity is the opportunity in the enterprise for social leverage," said Jim Tybur of Trinity Ventures. "There are tons of opportunities for that to permeate throughout business. There is a place where e-mail and social streams can coexist effectively."

Streams May Impact E-mail, But Won't Kill It Any Time Soon

As trendy as it can be at times to say that the new social "activity streams" - seen in most social networks and in the nascent Google Wave - are set to be the future of our communications, it is clear that e-mail has some serious life ahead of it. While many complain about the growth of e-mail messages, the replacement of actual messages from people with simple notifications from robots, and the march toward "Inbox Zero", this form of transmission is not going to be deleted for the foreseeable future, even if it morphs to adopt more social functionality. In an intriguing discussion at last week's Defrag conference, it was suggested that e-mail could tap into social networks, and that the most adept e-mail users would have advantages over the less savvy, but nobody called for its death. In an attention-grabbing blogosphere, that's a rare thing indeed.

Tim Young of Socialcast, reflecting on the move to activity streams in many of the networks we inhabit, echoed a belief of mine, saying that as information consumers, access to more data is key to our continued growth and adaptation to a changing world. He went a step further, saying that our ability to adapt quickly will promote the best discovery artists to the head of the pack.

"Information foraging is core to our human psychology," Young said. "It is the energy source for our minds. We hunt and gather for information to understand and adapt to our world. Natural selection favors organisms with the best food foraging strategies. In the future, natural selection will favor people and enterprises with the best information foraging strategies."

For the last two decades, one of the most frequent methods for finding information and distributing it outward has been e-mail. E-mail, like blogging, is well known for offering rich communications and longer-length missives, not restricted by the limits often found in mobile phone usage, Twitter and other sites. Unsurprisingly, a good number of e-mails are sent to multiple recipients as well.

Alexander Moore of Baydin reported, in a study of more than 250,000 e-mail messages (via the Enron corpus) and nearly half a million tweets, that 42 percent of e-mails go to multiple recipients, contrasted with only 6 percent of tweets, but he did say our immersion in the social Web had shown signs of affecting the way we send messages.

"We are conditioned from using Web 2.0 services," he said. "E-mail is moving toward shorter messages due to the rise of mobile phones. E-mail is going to be around for a while, but there are things we can learn from Twitter, Facebook and social media."

The conference's panelists largely agreed that e-mail needs improvement - an evolution rather than a revolution. Michael Cerda of cc:Betty said, "E-mail is on its last leg, but that leg is going to be for a long time," adding he preferred an e-mail box full of grouped conversations instead of individual messages. In parallel, Matt Brezina of Xobni said he thought e-mail could be made more social by exposing the relationships that live in e-mail, possibly even sharing attachments and e-mails with an extended social graph.

Brezina said that Xobni was built as a plug-in to Outlook instead of an Outlook replacement because "people hate changing their workflows," saying his product "generates more e-mail happiness" and increases worker productivity. Similarly, Cerda asked to "waken up the data and bring it to life". One way to do this, as Moore recommended, is to make feedback on e-mail more public. Why not add a "like" button to e-mail, as there is on Facebook, to give the sender credit, when most feedback on e-mail today is private?

The day's panelists looked to a future that keeps e-mail around but sees the integration of more social activity, borrowing from the world of social networks. The medium is not limited in the same way many social networks are, but its sheer age and occasionally overwhelming nature have people asking what's next. One of the major reasons it hasn't gone away? As Moore said, it is "rich in content, rich in conversation and rich in control". Nearly 9 of every 10 e-mails run 140 characters or more.

November 15, 2009

Leveraging Social Marketing for Business, Sales and Startups

Following on from last month's post on leveraging social networks to build Web traffic, courtesy of YourBusinessChannel and filmed while in the UK with Ecademy, three more short videos have surfaced from our extended interview on the impact that social media tracking and activity can have for companies big and small on the Web - be it by connecting with potential customers, or simply expanding their brand. The three videos are embedded below - proving to me that I sound as tired as I felt, having just completed a five-hour presentation following the San Francisco-to-London trip the day before.


Social Marketing Strategies a Boon for Business


Sales Advice for the Social Web



What Can Social Marketing Do for Startups?

November 12, 2009

Paladin Advisors Group: My Own "Stealth" Startup

Over the last four years on this blog, you have seen me talk a lot about hundreds of different startups and dozens of large enterprise companies. I have tried to share with you how I consume information and disseminate it outward to the many social networks. I let you know my thoughts on gadgets and hardware, and we have had open conversations about the culture of Silicon Valley, the future of blogging and social media, and we have discussed best practices and trends. But what we haven't talked about much is my job - because for the most part, this blog is as much about you as it is me. But over the last five to six months, I have been working on my own "stealth" startup, gaining clients - and it is soon coming time to tell you all about it. (Especially as Marshall Kirkpatrick mentioned it last night)


Marshall's Tweet from Last Night Deserved Answers

Paladin Advisors Group is a strategic advisory firm for startups and enterprise companies that are looking for guidance in their marketing, public relations, sales processes, customer influence, Web and social media. At the firm, which is a handful of partners large, I am the Managing Director of New Media.


Follow Us On Twitter at @PaladinAG

Over the last few months, I have been working with enterprise companies, including Emulex Corporation, and startups, including Kosmix.com and My6sense. For the startups, as I have done informally for years, I have been working on product feedback and focus, quality assurance and visibility. For the enterprise companies, my focus has been on integrating social media and blogging into their strategies, aligning messaging with PR, marketing and customer service.

As with the advisory roles I have held with SocialToo, BuzzGain, ReadBurner and others, I will always provide transparency and full disclosure on any relationships - and I hope that over the last few years, with my activity here and on the downstream social networks, I have earned your trust to provide clarity.

Why do I call this new venture "stealth"? Because it is new, and I haven't made a lot of noise about it. In fact, our Web site is under development. But you can follow us on Twitter to be notified as soon as we have more announcements. (http://twitter.com/paladinag)

And... if you think our services might be a fit for your business, e-mail me at lgray@paladinag.com.

Social Networks' Traffic Stabilizes, Facebook Nears Yahoo!


Facebook Up Slightly, MySpace and Twitter Flat to Down

Despite November being nearly half over, October's monthly traffic statistics have only just been released by Compete.com, and there are no major surprises in the social networking arena. Despite recent improvements and continued hype, traffic to Twitter.com decreased slightly, by 2 percent, month over month, tracking at the level it saw in June of this year, and lower than the previous three months. Facebook, the #3 site overall worldwide, behind only Yahoo! and Google, climbed more than 3 percent, to almost 129 million unique visitors, while MySpace stayed steady around 50 million (15th overall).


FriendFeed and Posterous Decline - While Twine Plunges

The most movement came in the lower tiers, as FriendFeed continued its descent following the Facebook acquisition, shedding nearly 7 percent of visitors and dropping below the 700,000 mark, from a one-time peak above 1 million, while Posterous dropped more than 12 percent, to just under 1.2 million visitors. Twine, which once peaked above 2 million, is now just over 120,000.


Yahoo!'s Slow Decline Comes as Facebook Rises Toward the #2 Spot

Facebook's slow but steady growth actually has it looking less in the rear-view mirror toward companies like Twitter (which scored 23 million uniques to Facebook's 129 million) and more at the big gun right ahead of it - Yahoo!, which continued its slow descent, dropping just over 1 percent, to 135 million unique visitors. In fact, one more month with the same trajectory would have both networks tied at about 133 million visitors, so we could see a change in placement come November.
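The arithmetic behind that projection is simple enough to check: apply October's month-over-month rates (Facebook up about 3 percent, Yahoo! down about 1 percent) to October's unique-visitor counts and see where November would land if the trajectories held.

```python
# Back-of-the-envelope projection of November uniques, assuming October's
# month-over-month rates from Compete hold for one more month.

facebook_oct = 129_000_000   # ~129 million uniques in October
yahoo_oct = 135_000_000      # ~135 million uniques in October

facebook_nov = facebook_oct * 1.03   # up a bit over 3 percent
yahoo_nov = yahoo_oct * 0.99         # down just over 1 percent

# Both land at roughly 133 million, within about a million of each other.
print(f"Facebook: {facebook_nov / 1e6:.0f}M")
print(f"Yahoo!:   {yahoo_nov / 1e6:.0f}M")
```

The two projections differ by well under a million visitors, which is why a single additional month on the same trajectory could plausibly flip the #2 spot.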


Google's Position at #1 Remains Unchallenged (Shown With YouTube)

Unsurprisingly, Google reported in at #1 again, counting almost 150 million unique visitors in the month, according to Compete (a figure that in my opinion is probably low). In addition, the company's YouTube subsidiary tracked just under 85 million unique visitors, good enough for the #5 position worldwide on its own. GMail continued its climb, to 9.3 million visitors, up 98% from this point last year.

Surprisingly, GMail's traffic is more than 3 times that of Hotmail.com, which has even been surpassed by Apple's Me.com MobileMe e-mail offering. Me.com sported 3.5 million visitors, growing 98% year over year, contrasted with Hotmail's 2.5 million, which decreased 7 percent, according to Compete.


LinkedIn Stays Hot - See Versus Twitter

Interestingly, amid the recession and high unemployment, LinkedIn.com traffic increased 3.3 percent in the month, to 15.5 million unique visitors, up 89% on the year. Monster.com, the massive job site, tracked in at 41.5 million unique visitors, good for #20 in the world, up 47% on the year.

Some other sites of note:
  • Apple.com traffic tracked at 21.4 million, compared to 15.5 million for hp.com and 13.4 million for Dell.com.
  • Digg.com traffic decreased less than 1 percent, up 57% on the year, good for 43 million uniques.
  • Technorati.com traffic was flat, with only 2.8 million unique visitors.
Disclaimer: Compete statistics are known to be imperfect, but they are always interesting.

November 11, 2009

Attacking the Web's Beverly Hills and Schenectady Problem

Not too long ago, every new site you joined on the Web forced you to provide a daunting array of details about yourself in order to join. Full pages of pull-down menus asking about your date of birth, your marital status, your home address and other information were standard. But over the last few years, with advents such as OpenID, OpenSocial, Facebook Connect and, more recently, Twitter OAuth, personal identities are becoming portable - letting you sign in to a new site with a dedicated login, and reducing your need to store yet another password.

Kevin Marks, vice president of Web services at BT and formerly of Google and Technorati, relayed at the Defrag Conference this afternoon that under the old way, companies, after accumulating users, would often find an extremely high number of them claiming to live in either Beverly Hills or Schenectady, New York. Why? Because they gave their zip codes as either 90210 or 12345. They were lying - sick of answering page after page of personal data for yet another Web site.

In the years since, thanks to efforts like OpenSocial, we have seen the rise of Web standards that interoperate, letting you pass along your personal information and credentials to new sites without having to create yet another user name and password.

"Over the last two years, we worked out the sanitization of protocols, so it could fetch things from one site to another," Marks said. "In that time, OpenSocial is up to 1 billion users. There are sites all over the world who are using this."

Marks broke down the solution to the real identity problem into four pieces:
  • Me
  • My Friends
  • What We Do
  • The Flow
Tools like OpenID and WebFinger solve for "Me"; Portable Contacts, through unification with the vCard specification, solves for "My Friends"; activity streams solve for "What We Do"; and new protocols like AtomPub, PubSubHubbub and Salmon are solving "The Flow". As you know, I have been a big proponent of PubSubHubbub and Salmon, as well as Facebook Connect and Twitter OAuth, as they not only pass data between sites, but also make that data pass more quickly. And while they are causing what could be considered a revolution, it is happening through the simple evolution of activity that is already taking place.
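As a concrete illustration of the "Flow" piece, here is a minimal sketch of the form fields a PubSubHubbub subscriber POSTs to a hub to start receiving pushed updates for a feed, per the published 0.x specification. The topic and callback URLs here are placeholders, not real endpoints.

```python
# Sketch of a PubSubHubbub subscription request body. The hub.* field names
# come from the 0.x spec; the URLs are hypothetical placeholders.

from urllib.parse import urlencode

def build_subscription(topic, callback):
    """Build the form-encoded body a subscriber POSTs to a hub."""
    return urlencode({
        "hub.mode": "subscribe",    # or "unsubscribe" to stop the flow
        "hub.topic": topic,         # the Atom/RSS feed being followed
        "hub.callback": callback,   # where the hub pushes new entries
        "hub.verify": "async",      # hub confirms intent via the callback
    })

body = build_subscription(
    "http://example.com/feed.atom",
    "http://subscriber.example.com/push",
)
print(body)
```

Once the hub verifies the subscription, new entries are pushed to the callback as they are published, rather than waiting to be polled - which is what makes the real-time "flow" faster than the old page-fetching model.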

"All these standards are empirical standards," Marks said. "We first did this with microformats. We asked what people are doing already, and agreed we would do the same thing."

Now, if you do tell companies you live in Beverly Hills or Schenectady, New York, there's a greater chance that you really do, and maybe we'll believe you.

Search: Less Useful Due to Massive Info Growth, the Flow?

In a forward-looking presentation at the Defrag Conference this morning, Stowe Boyd pushed attendees to think about how the Web will look by the year 2019, aided by a look at the massive amount of change that has taken place over the previous decade. One of Boyd's most aggressive claims was that the world of search is falling apart, as the problems it initially aimed to solve have been eroded by the information explosion and the corresponding ease of access to social connections in a world of real time. Without saying that social networks would render the established search giants irrelevant, he suggested, as he has frequently on his blog over the last few years, that the "flow" will replace the world of Web pages - and change the game on search entirely.

Boyd essentially argued that social tools are in the process of changing the culture. He said people were incentivized to discover breaking news from social friends through networks like Twitter and Facebook, which makes the new "real-time Web" interesting. He further suggested that how one interprets this news to define "meaning" is what will replace search.

One of the biggest reasons he thinks meaning will replace search is that search engines initially set out to find the few documents on the Web relevant to your query, and now practically any search delivers millions of results.

"Search is starting to fail because scarcity has been replaced by infinity," Boyd said. "We are heading toward a world where all the critical information is available publicly, and breaking news is a few seconds away - at the most. We will switch to instead relying on finding things through our social connections - engines of meaning, and the source of what is important."

Assuming social elements are going to trump algorithms and crawlers that power today's engines, Boyd said he believes that the most important dimension is now time, not space - and that for the most part, this dimension is shared.

"We are not sharing space online, we are sharing time," he said. "Our time is increasingly not our own. A shared thread of time will be the norm, and how we will get work done."

This new shared thread of time, or "flow", as Stowe referred to it, is poised to replace today's static Web pages as a new element in the social Web - one he pontificated could be "the most defining moment of our civilization."

Skepticism Over Current State of Social Web at Defrag

At the Defrag conference in Denver this morning, there was an acknowledgement that social elements are infiltrating practically every aspect of business and interpersonal engagement online. But unlike other events, which have seen a practical hugfest over the latest apps or services, the morning's speakers expressed a great deal of frustration over trying to find real benefits and utility in all the activity happening online. Speakers suggested today's tools have a stark lack of context, that businesses are too obsessed with having a complete data set and not focused enough on acting on that data, and that many developers are designing apps that simply don't drive benefits.

Eric Marcoullier, CEO of GNIP, was most direct in his comments, saying that "the business world doesn't give a (crap) about your lifestream app." Designing yet another application that sorts all your content online, he argued, is essentially a list of lists - a list of "my stuff" or "my friends' stuff" - which is cute, but not necessarily valuable in decision making.

GNIP is best known for offering data collection and management as a service. The company has seen some ups and downs over the last 18 months, culminating in a significant layoff in September that cut seven heads from the dozen on its roster. But since the move, Marcoullier said, the last few weeks have been "stellar" in terms of productivity, even as his clients aren't necessarily looking for answers from their data - just more data.

He asked, "Is there an opportunity to drive business decisions and revenue for your company?", saying "Data is useless without effort. When you get data, it is a lot of work to do something useful with it, yet market research companies are obsessed with completeness of data."

Similarly, T.A. McCann, CTO of Gist, said that leading social services like LinkedIn have curated millions of nodes, tracking millions of relationships. But for most users, it hasn't yet been clear how these connections can be leveraged to drive real daily utility - beyond suggesting new connections and companies one should know due to shared interests.

Many of these shared interests are displayed in social streams like Twitter and Facebook, which, despite their meteoric rise in visibility, are still struggling to provide more than a simple flow of updates and links.

Tim Young of SocialCast complained, "What I find on Twitter is link vomit, or link carpet bombing and swarming about events. During the day, I get all these links, and the issue is I click the link and there isn't a lot of context. Why did they share this and how did it get here?"

Tim called for a new solution that would save the traces and paths of content, helping communicate new findings and derive value - something made ever more difficult when the most common real-time search repository, Twitter search, hosts an index that reaches back as few as two days.

And despite many people's claims that finding this data ever more quickly is going to make us more productive as a species, Stowe Boyd dumped on that, saying "the myth of increased productivity is a failed world view," adding, "people will trade personal productivity for connectedness, and they will accept an interrupt to help somebody in their social connections."

That's not to say all is dark. Eric of GNIP promised he is still a huge fan of social media, and Stowe pontificated that the rise of the social Web may already be "the most valuable artifact ever created". But between a raft of useless lifestreaming applications and the gap separating link visibility from link utility, the speakers seemed to agree that we have a long way to go from today's promises to tomorrow's solutions.

Twitter Plucks Data Management Guru from Yahoo!

To say that Twitter is dealing with massive amounts of data flowing through its servers these days would be an understatement, as the service sees strong growth and significant mindshare. With the company having passed what looks to have been its rockiest stretch of the last twelve months, Twitter is now able to focus on rolling out significant new features, from Lists to geolocation, trend definitions and retweets. And the microblogging giant looks to be taking extra steps to harness the power of its rapidly expanding data set.

If the company's own team list is to be believed, they just picked up Utkarsh Srivastava, a highly respected senior research scientist at Yahoo!, who is best known for his work on building large-scale distributed systems, specifically his efforts with Hadoop.

Hadoop, inspired by the Google File System and MapReduce, is a framework that enables applications to work across distributed server nodes on significant data sets - potentially ranging into the petabytes. Yahoo!, Google's off-and-on competitor, has been the company most associated with Hadoop. While at Yahoo!, Srivastava was one of the original designers of "Pig", an Apache project for analyzing large data sets that leverages Hadoop. (See also the research paper: Pig Latin: A Not-So-Foreign Language for Data Processing)

Srivastava, a PhD graduate from Stanford University in Computer Science, has been working at Yahoo! Research since 2006. (See his home page and LinkedIn profile)

Since we don't know what aspects of Twitter Srivastava may be working on, it's premature to guess whether his efforts will be focused primarily on new initiatives or simply on helping the company scale its growth. I can dream and hope that he is the missing piece that brings Twitter's high-potential search engine fully online, but that is no doubt a big project indeed.

Update: This hire has been confirmed by Srivastava and also covered by TechCrunch.

November 08, 2009

The Story of Google's Closure: Advanced JavaScript Tools

On Thursday, Google caught the eyes of Web developers around the world with its move to open source the Closure JavaScript compiler, library and template system to the Web community - the very same tools that power popular applications including Gmail, Google Docs, Google Maps, Google Reader, and no doubt many others. The Closure tools optimize Web code to be compact and high-performance, reducing page load and redraw times without compromising capabilities. Around the Web, the release elated geeks both inside and outside Google, many of whom previously worked with the tools while working for the Mountain View tech giant.

To better understand these tools, and get a real-world perspective on Closure, I reached out to Mihai Parparita, an engineer on the Google Reader team, to hear of his experience. He was gracious enough to extend a very thorough overview, explaining the tools' origin and use case, by e-mail, much of which is summarized below.

The Closure compiler dates back to Gmail's launch in April of 2004. Paul Buchheit, now of Facebook via FriendFeed, previously of Google, and largely credited with founding Gmail, highlighted the announcement this week on his FriendFeed, calling it the "Gmail JavaScript compiler". The library and template system were initiated a few years later.

As Google Reader development started in early 2005, with Mihai, Jason Shellen, Chris Wetherell (the latter pair are now at Thing Labs working on Brizzly, which also uses Closure) and others working to make a top-notch Web-based RSS reader, the team leveraged Closure immediately after the initial prototypes. At the time, the team was less focused on download size than they are today, but the compiler's aggressive function checking improved error detection.

Mihai writes:
"Until the last month or so leading up to the Reader launch in October 2005, the size benefits of the compiler were less important, since we were less focused on download time (and performance in general) and more on getting basic functionality up and running. Instead, the extra checks that the compiler does (e.g. if a function is called with the wrong number of parameters, typos in variable names) made it easier to catch errors much earlier. We have set up our development mode for Reader so that when the browser is refreshed, the JavaScript is recompiled on the server and is used with the page when it is reloaded. This results in a tight development loop that makes it possible to catch JavaScript errors as early as possible."
As the library and template systems did not arrive until approximately 2006, Reader utilized homegrown code in their place that provided similar functionality, including handling different browser versions and quirks, Mihai said. But as soon as they were available, Reader used the new tools for new code, and later to replace old shared libraries and homegrown code. Mihai said he performed an audit to detect usage of the old code and find its Closure equivalents, so work could be distributed among the team during so-called "fixit" periods, when attention was given to code quality instead of new functionality.

With Closure implemented, the benefits to Google Reader users are clear. Mihai estimates that without Closure, Reader's JavaScript code would be a massive 2 megabytes, which shrinks to 513 kilobytes with Closure, and all the way down to 184 kilobytes using gzip, which is supported by nearly all browsers. Additional benefits include the near-elimination of concerns around browser differences, and a large JavaScript codebase that remains manageable, one "that doesn't get out of control as it ages and accumulates features", he said. (Note: download time was cited as the main reason Robert Scoble moved away from Reader, and the team recently made a push to optimize the code even further.)

Closure's role at Reader, initially limited to low-level code, has "moved up the UI stack" to the point where it is leveraged for UI widgets. Mihai says "this means that it's not a lot of work to do auto-complete widgets, menus, buttons, dialogs, drag-and-drop, etc. in Reader."

The excitement around Closure's release was palpable from developers throughout Silicon Valley and beyond, as you could see from blog posts by Erik Arvidsson, a co-creator along with Dan Pupius, and a series of posts at bolinfest.com. Other excited tweets came from Mike Knapp, the aforementioned Chris Wetherell and Kushal Dave.

As Mihai says, "You can tell that there's something special about this when you look at the ex-Googlers cheering about its release. If it had been some proprietary antiquated system that they had all been forced to use, they wouldn't have been so excited that it was out in the open now."

Like many other projects at Google, Closure's compiler, library and templates began solely as 20% projects and are still largely dependent on work done in so-called 20% time at Google. Mihai says that if one project needs a feature from the compiler or the library, it is encouraged to contribute that feature back as well.
"To give a specific example, Reader had some home-grown code for locating elements by class name and tag name (a much more rigid and simplified version of the flexible CSS selector-based queries that you can do with jQuery or with the Dojo-based goog.dom.query)," Mihai said. "As part of the process of "porting" to the Closure library, we realized that though there was an equivalent library function, goog.dom.getElementsByTagNameAndClass, it didn't use some of the more recent browser APIs that could it make it much faster (e.g.getElementsByClassName and the W3C Selector API). Therefore we not only switched Reader's code to use the Closure version, but we also incorporated those new API calls in it. This ended up making all other apps faster; it was very nice to get a message from Dan Pupius saying that the change had shaved off a noticeable amount of time in a common Gmail operation."
Now clearly I'm no developer beyond simple HTML and JavaScript, but I know good Web apps when I see them, and Google's Web apps (as well as Brizzly) are among the best in the world. They have managed to take what used to require massive software installs and make them relatively lightweight Web instances with similar functionality between services. With the release of Closure, sharp Web developers will be looking to leverage these JavaScript libraries and tools to make their own products best of breed - something that will benefit the Web as a whole. I appreciate Mihai's openness, and his willingness to share the story behind the story.