Getting Real about Making Lasting Technology

Years ago, techies had different phrases to describe the explosive growth of computing software and networking. The “Unix development model” became “the Internet development model”, as software was deployed first and made to work in successive patches and iterations. Those were exciting times — a “killer app” could launch a technology, especially if it supported the “network effect” (it was useful even if only a few people were using it, and only became more useful as more people took it up).

It seems to me that those phrases fail to do justice to the effort and ecosystem that allowed technologies like the Internet to take off and continue on a growth curve for decades.

As Open Source Software (OSS) is increasingly important to commercial endeavours, and companies worry about becoming heavily dependent on something that becomes abandonware, I’d like to offer three things to look for in technology/systems if you want to see them take off and last beyond initial specs.

Continue reading

Real Money Can’t Buy Routing Security — But BitCoin Might?

Last week I learned something about BitCoin that entirely changed my view on the urgency for implementing uniform security in the global routing system. While I value and appreciate that the Internet is made up of a network of independently owned and operated networks, I think there is compelling reason for network operators to peer over the parapet of their network borders and focus on routing security as a contribution beyond their own realm.

For most of the history of the Internet, being in the routing business meant delivering packets on a “best effort basis”. In practical terms, as the Internet has gotten more commercially important, that has meant that individual network operators have focused on improving the efficiency and effectiveness of traffic within their own networks. For handoffs between networks (routing to the rest of the world), the emphasis has been on ensuring connectivity to well-positioned neighbour networks.

Continue reading

Mobile Agents, man!

Over twenty years ago, we said it was a bad idea. Then the tables were turned, in the name of making the Internet commercially viable, and we’ve been living with the consequences ever since. The current “information economy” (aka, software and services spying on users) is “mobile agents” in reverse.

A quarter of a century ago, when the Internet was just blooming in the world, and technology innovation was everywhere, there was discussion of software agents. These were typically outlined as bits of code that would “act on your behalf”, by transporting themselves to servers or other computing devices to do some computation, and then bring the results back to your device. Even then, there was enough security awareness to perceive that remote systems were not going to be interested in hosting these foreign code objects, no matter how “sandboxed”. They would consume resources, and could potentially access sensitive data or take down the remote system, inadvertently or otherwise.

I know, right? The idea of shipping code snippets around to other machines sounds completely daft, even as I type it! For those reasons, among others, systems like General Magic’s “Magic Cap” never got off the ground.

And here is the irony: in the end, we wound up inviting agents (literally) into our homes. Browser plugins like Ghostery will show you how many suspicious bits of code are executing on your computer when you load different webpages in your browser. Those bits of code are among the chief actors in the great exposure of private data in today’s web usage. You’re looking at cute cat pictures, while that code is busily shipping your browser history off to some random server in another country. Browsers like Firefox do attempt to sandbox some of the worst offenders (e.g., Facebook), but the problems are exactly the same as with the old “agent avatar” idea: the code is consuming resources on your machine, possibly accessing data it shouldn’t be, and generally undermining your system in ways that have nothing to do with your interests.

With the growing sense of unease over this sort of invasive behaviour, the trend is already being slowed. Here are two of the current countervailing trends:

  • Crypto, crypto everywhere — blockchain your transactions and encrypt your transmissions. That may be necessary, but it’s really not getting at the heart of the problem, which is that there is no respect for the information being shared in these transactions. Take your pick of analogy — highway robbers, thumbs on the scale at the bazaar, smash-and-grab for your browser history, whatever.
  • Visiting increasingly specific, extra-territorial regulation on the Internet, without regard for feasibility of implementation (GDPR, I’m looking at you…). Even if some limited application of this approach helps address a current problem, it’s not an approach that scales: more such regulation will lead to conflicting, impossible to implement requirements that will ultimately favour only the largest players, and generally pare the Internet and its services down to a limited shadow of what we’ve known.

A different approach is to take a page from the old URA (“Uniform Resource Agent”) approach — not the actual technology proposal, but the idea that computation should happen (only) on the computing resources of the interested party, and everything else is an explicit transaction. Combined with the work done on federated identity management, those transactions can include appropriate permissions and access control. And, while the argument is made that it is hard to come up with the specifics of interesting transactions, the amount of effort that has gone into creating existing systems demonstrates a level of cleverness in the industry that is certainly up to the challenge.

Who’s up for that challenge?

TLS 1.3 — what is it, and who cares?

What is the news:  publication of the updated security standard for Internet transport layer security:  TLS 1.3

Why it matters:  TLS provides the basis for pretty much all Internet communication privacy and encryption.  The big deal with version 1.3 of TLS is that it has been stripped of features with previously-detected vulnerabilities, and its security and encryption have been extended.  TLS 1.3 should be more robust, and even less vulnerable, than TLS 1.2.

Who benefits:  TLS 1.3 only benefits people using applications and devices that implement it.  The good news is that, apparently, major browsers have already implemented and deployed it.  Additionally, the hope is that the lighter weight, more straightforward nature of TLS 1.3 (as compared to previous versions) will be attractive to other application and device developers that have been reluctant to implement TLS in the past.
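For developers wondering whether their own stack is among those that “implement it”, Python’s standard `ssl` module makes that easy to check, and lets a client insist on TLS 1.3. A minimal sketch (Python 3.7+; whether `HAS_TLSv1_3` is true depends on the OpenSSL build underneath):

```python
import ssl

# True if the underlying OpenSSL build supports TLS 1.3 (Python 3.7+).
print(ssl.HAS_TLSv1_3)

# A client context that refuses to negotiate anything older than TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
```

A handshake with such a context simply fails against servers that haven’t deployed TLS 1.3 — which is the point above: the benefit only materializes when both ends implement it.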

More info:

Keeping the “Inter” in “Internetworking”

Last week, I had the opportunity to attend and speak at Interop ITX 2018, in Las Vegas.  It was my first Interop — and an interesting opportunity to see more of the enterprise networking side of things.  That’s a space that is growing, and increasing in complexity, even as cloud and software as a service are notionally taking on the heavy lifting of corporate IT.

A refrain I overheard several times was that attendees really enjoyed having a vendor-neutral conference at which to talk about a variety of practical topics.  Indeed, it felt a bit like a NOG with an enterprise focus.

Continue reading

When Permissionless Doesn’t Mean Permissive

“Permissionless innovation” is one of the invariant properties of the Internet — without it, we would not have the World Wide Web.   Sometimes, however, this basic concept is misunderstood, or the expression is used as an excuse for bad behaviour.

Consider the case of the Guardian’s article on the use of “smart” technology, such as wifi trackers that tail people through their phones’ MAC addresses, to monitor activities in Utrecht and Eindhoven:

“Companies are getting away with it in part because it involves new applications of data. In Silicon Valley, they call it “permissionless innovation”, they believe technological progress should not be stifled by public regulations.”

Continue reading

Something Wicked this Way Comes

“Cooperation”, “Consensus” and “Collaboration” are three C-words that get thrown around in the context of Internet (technology and policy) development.  Given that consensus is at the heart of the Internet Engineering Task Force’s organizing principles, I was a little surprised to see it treated as a poor discussion framework in Peter J. Denning and Robert Dunham’s “The Innovator’s Way – Essential Practices for Successful Innovation”.

While I still don’t entirely buy the authors’ view of consensus as a creativity-stifling force, with a little more reflection I could see their argument that consensus aims to narrow discussion to find an outcome.  In an engineering context of complex problems, when the problem is well understood and an answer has to be selected, that’s a good thing.

However, for many of the challenges facing the Internet, there isn’t even necessarily agreement that there is a problem, let alone a rough notion of what to do about the challenge.    These are wicked problems, requiring more collaboration across diverse groups of people and interests.   The heartening thing  is that we’ve actually solved some of these wicked problems in the past — the existence and continued functioning of the Internet is testimony to that.

Continue reading

Collaborate or Fragment: Net Futures


At any given moment, the Internet as we know it is poised on the edge of extinction.

There are at least two ways to understand that sentence. One is pretty gloomy, suggesting a fragile Internet that must continually be rescued from doom. The other way to look at it is that the Internet is always in a state of evolution, and it’s our understanding of it that is challenged to keep up. I tend to prefer the latter perspective, and think it’s important to keep the Internet open for innovation.

At the same time, change can be scary — if it leads to an outcome that impacts us badly, from which we cannot recover, for example. That’s at least one reason why discussion of policy requirements and changing the Internet can be pretty tense.

If we want a dialog about the Internet that is as global as the network itself, we need to know how to talk about change:

  • what are the useful points of reference (hint: they aren’t particular technologies), and
  • how can we frame a successful dialog?

Continue reading

Mind your own fitness tracker

In some ways, it reads like a bad novel: “Every Step You Fake”, a Canadian study of privacy and security in personal fitness devices. The report outlines two key areas in which these devices have significant security and privacy shortcomings — but just as you feel sympathy for the devices’ wearers, you learn they may be the “bad actor” in other cases. We can spot adversaries in every direction, but who’s the hero of this drama? And, frankly, does it need to be a drama?
The two shortcomings outlined in the report are that:

  • the devices’ radio-based transmissions can “leak” your presence and make you trackable (anonymously) through shopping malls that do that sort of thing; and
  • it’s possible to fake out some of the website collection servers so that you can “adjust” your results.
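The first point is worth making concrete: a tracker never needs to store the MAC address itself. Hashing it produces a stable pseudonymous ID that links a device’s visits together — which is precisely why modern phones randomize their Wi-Fi MAC while scanning. A minimal sketch (the salt and function name here are invented, purely for illustration):

```python
import hashlib

def pseudonymous_id(mac: str, salt: str = "mall-wifi") -> str:
    """Hash a MAC address into a stable pseudonymous tracking ID."""
    return hashlib.sha256((salt + mac.lower()).encode()).hexdigest()[:12]

# A device broadcasting the same MAC on every visit produces the same ID,
# so visits can be linked even though the raw MAC is never stored.
monday = pseudonymous_id("AA:BB:CC:DD:EE:FF")
friday = pseudonymous_id("AA:BB:CC:DD:EE:FF")
```

The “anonymity” on offer is thus only pseudonymity: the IDs carry no name, but they link every visit by the same device.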

Well, wait. Why so much drama around a device you electively wear on your person? What are the actual problems that need solving here?

Continue reading

5-in-5-in-5: the future of the Internet

I had a fun time on ISOC-DC’s #5in5in5 panel — talking about five things that will be different about the future of the Internet, in 2020.  There’s video of the panel available on ISOC-DC’s livestream TV site.

My 5 points, delivered within 5 minutes, covered:

  1. The Internet continues to try to “get out of the box” — increasingly, we don’t see “the Internet” as separate from our tools or tasks.  They merge.  Sometimes it’s seamless — you might carry out a text message conversation with someone from your phone, then answer from your computer as you walk by, and pick it up again on your iPad.  Each device has all the context.  You’re not thinking about which network you’re using to send the messages.  The downside is the loss of individual control over your Internet experience — having your car hacked over the net while you’re driving it is the price of all that invisible interconnectedness.  Also, the Internet becomes opaque, another packaged commodity, and a lot less likely to be something we can all hook into, climb onto, and understand.
  2. Various forces — policy makers, big business, whatever — are trying to put the Internet into a box, or some structure.  For example, regulations requiring that the Internet not serve particular information beyond geographic boundaries is essentially implemented by aligning the actual Internet network with those geopolitical limits.  I have had a lot to say about the challenges with that approach, but suffice to say that the Internet was not built to pay attention to political lines, and imposing those structures reduces its resiliency and its efficiency and effectiveness.
  3. New approaches are needed!  If these approaches to regulation and restriction are not going to work (because they reduce the Internet to something unusable), then we need a different way to talk about the Internet, the services that run over the Internet, and how we articulate and enact policies that relate to them.  Don’t try to curtail web page access by making laws requiring ISPs to delete entries in DNS; instead, figure out a better way to get international policy to common ground on what makes inappropriate use of the Internet.  (Make the action illegal, not the tool.)
  4. And yet, everything old is new again.
    1. The Internet was developed as an inter-network — making a whole out of disparate parts collaborating.
    2. We could not have seen the kind of IPv6 deployment we have today if large, competitive web companies hadn’t stood up to do World IPv6 Day and World IPv6 Launch.
    3. The future is better if we can regain and foster more of that sense of cross-industry collaboration to find solutions that are best for the Internet as a whole.
  5. As I recall saying at an ISOC-DC “Future Internet” panel some years ago, if I could tell you what the Internet was going to be in 2020, it wouldn’t be the Internet, now would it?!  The beauty and power of the Internet is that it is a platform that supports creativity, communication and development for everyone, and we have no means to size the depth and breadth of that much creative energy.  So much of what we now consider “normal” in the Internet — say, Facebook — was unthinkable until someone thought of it and built it.  If the Internet loses its ability to support that kind of novel development, then it’s not the Internet anymore, it’s just another network.
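To see why the DNS-deletion mandates mentioned in point 3 don’t actually remove anything, it helps to remember that resolution is just a lookup, and clients choose where to look. A toy model (the resolver tables, name, and IP below are invented; 203.0.113.5 is from the documentation-only TEST-NET-3 range):

```python
# Toy model of DNS resolution: a name-to-IP table lookup.
ISP_RESOLVER = {}                                     # entry deleted by mandate
PUBLIC_RESOLVER = {"blocked.example": "203.0.113.5"}  # any other resolver

def resolve(name, resolver):
    """Return the IP a given resolver has for a name, or None if absent."""
    return resolver.get(name)

blocked_at_isp = resolve("blocked.example", ISP_RESOLVER)      # no answer here
still_reachable = resolve("blocked.example", PUBLIC_RESOLVER)  # answered anyway
```

The deletion only affects clients that stay on the ISP’s resolver; pointing at any other resolver, or using the IP directly, bypasses it entirely — hence “make the action illegal, not the tool”.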

It was a fun panel — good discussion with the attendees, too.   From the comments that came up during the session, it’s clear that people have very real concerns about where we’re going with the Internet as a platform for “permissionless innovation”, while ensuring that we retain some level of privacy and management of our personal information.

As Mike Nelson said, moderating the session — there’s enough meat in each of the topics we brought up to fuel a semester long university course!