The Future of Internet Development

Today, I want to share with you something that I’ve been working on for the last several months — a concrete vision and proposal for supporting the Internet’s development.


For some, “Internet development” is about building out more networks in under-served parts of the world. For others, myself included, it has always included a component of evolving the technology itself, finding answers to age-old or just-discovered limitations and improving the state of the art of the functioning, deployed Internet.  In either case, development means getting beyond the status quo.  And, for the Internet, the status quo means stagnation, and stagnation means death.

Twenty-odd years ago, when I first got involved in Internet technology development, it was clear that the technology was evolving dynamically.  Engineers got together regularly to work out next steps large and small — incremental improvements were important, but people were not afraid to think of and tackle the larger questions of the Internet's future.  And the engineers who got together were the ones who would go home to their respective companies and implement the agreed-on changes within their products and networks.

Time passes, things change.  Now that the Internet is an important underlay to the world's day-to-day activities, a common perspective on the best "future Internet" is: hopefully as good as today's, but maybe faster.   And, many of the engineers have gone on to better things, or management positions.  Companies are typically larger, shareholders a little more keen on stability, and engineers are less able to go home to their companies and just implement new things.

If we want something other than “current course and speed” for the Internet’s development, I believe we need to put some thoughtful, active effort into rebuilding that sense of collaborative empowerment for the exploration of solutions to old problems and development of new directions — but taking into account and working with the business drivers of today’s Internet.

Clearly, it can be done, at least for specific issues — I give you World IPv6 Launch.

Apart from that, what kinds of issues need tackling?  Well, near term issues include routing security as well as fostering measurements and analysis of the currently deployed network.  Longer term issues can include things like dealing with rights — in handling personal information (privacy) as well as created content.

I don’t think it requires magic.   It might involve more than one plan — since there never is a single right answer or one size that fits all for the Internet.  But, mostly, I think it involves careful fostering, technical leadership, and general facilitation of collaboration and cooperation on real live Internet-touching activities.

I'm not just waving my hands around and writing pretty words in a blog post.  Earlier this year, I invited a number of operators to come talk about an Unwedging Routing Security Activity, and in April, we had a meeting to discuss possibilities and particulars.  You can find out more about the activity, including a report from the meeting, here.

That was a proof point for the more general idea of this “coordination” function I described above — for now, let’s call it the Centre for the Creative Development of the Internet, and you can read more about that here:  http://ccdi.thinkingcat.com/ .

In brief, I believe it’s possible to put together concrete activities that will move the Internet forward, that can be sustained by support from individual companies that have an interest in finding a collaborative solution to a problem that faces them.  The URSA work is a first step and a proof point.

Now the hard part:  this is not a launch, because while the idea is there, it's not funded yet.  I am actively pursuing ways to get it kick-started, to be able to make longer-term commitments to needed resources, and to get the idea out of the lab and working with Internet actors.

If you have thoughts or suggestions, I’m happy to hear them — ldaigle@thinkingcat.com .   Even if it’s just a suggestion for a better name :^)  .

And, if we’re lucky, the future of Internet development will mirror some of its past, embracing new challenges with creative, collaborative solutions.

Never. Stop. Learning.

Last week I had the privilege of participating in the Norwich University College of Graduate and Continuing Studies 2015 Residency Conference.    NU runs several online graduate programs, which require students to spend one week (at the end of the program) on campus in Northfield, VT.

As a member of the advisory board for the highly-rated Master of Science in Information Security and Assurance (MSISA) program, I find it interesting to meet the students and see the breadth of backgrounds, interests and future plans that they have.  Online university programs are real, concrete, and provide access to education that would otherwise be very difficult for working professionals to accommodate in their busy lives.

The Residency Conference is the icing on the cake as students have the opportunity to meet each other and some of the professors they’ve been working with throughout their program.

And, really, who could resist the opportunity to spend a week in Vermont?

Not I, clearly — this was my second opportunity to participate in the annual conference.  Last year, I gave an introduction to Internet governance, among other things.

This time, in addition to helping facilitate case study discussions at the conference, I had the opportunity to participate in the hooding at the Academic Recognition Ceremony for the MSISA and MPA programs — a fine opportunity to wear academic regalia!  The school's photographer caught some pics, so there is evidence!

I’ve long been convinced of the importance of continued learning — always keep stretching, doing, learning.  Last week’s Residency Conference energy was a reminder that formal education, whether in person or online, can be an even more intense and rewarding experience than self-directed learning.

Making or Breaking the Internet: Policy choices


A visible product of my “self-funded sabbatical” is now published!

On the Nature of the Internet, by Leslie Daigle

My aim and hope is that it will provide some further insight into what not to do to the Internet, intentionally or inadvertently, so that collectively we can agree on the need to find better ways of dealing with the very real policy issues that need solutions.

The Internet has proven itself highly accommodating of change over the decades — today’s Internet looks nothing like the network of networks that existed 25 years ago, when commercial traffic was still prohibited from traversing it.  But, most of the changes that it has faced have come from technological or direct usage issues.  In today’s reality, many of the forces at play on the Internet are direct or indirect outcomes of (government and regulatory) policy choices.

If we want to continue to have a healthy and evolving Internet, we need to learn how to make policies that are consistent with, or at least not antithetical to, what makes the Internet work.

So, when I was asked last year to write a paper on the nature of the Internet for the Global Commission on Internet Governance, I turned first to the work we'd done at the Internet Society on the "Invariant Properties" that are true of the healthy Internet.   In the paper that I wrote for the GCIG, now published as the commission's 7th paper, I tackled the questions of policy choices that are driving us towards national networks and localized abuse of Internet infrastructure, through the lens of those eight invariant properties of Internet health.

Here’s the executive summary:

This paper examines three aspects of the nature of the Internet: the Internet’s technology, general properties that make the Internet successful and current pressures for change. Current policy choices can, literally, make or break the Internet’s future. By understanding the Internet — primarily in terms of its key properties for success, which have been unchanged since its inception — policy makers will be empowered to make thoughtful choices in response to the pressures outlined here, as well as new matters arising.

Have a read of the paper, and let me know what you think — other examples of policy driving us in the wrong direction?  New approaches to policy-making that will help us solve problems and have a healthy Internet?  I’d love to hear your perspective, and — more importantly — see a broader discussion develop around different perspectives.

Applications Architecture: it’s not just a spaghetti diagram of protocols

Those of you who track the announcements of IETF Internet-Draft publications may have noticed a "draft-daigle-" document pop out in the flurry of last-minute pre-IETF92 documents.  (ICYMI, the document is:  draft-daigle-AppIdArch-00.txt).

Related to the work I’ve been doing in bolstering the content of “The Attic” of applications identifier technology history, I started to think about a general framework to describe applications identifiers.  So many times we’ve been through the same design discussions — it would be nice to capture the state of the art in tradeoffs and design considerations and simply move forward.
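
To make that concrete, here is a minimal sketch of my own (not taken from the draft) that pulls apart a familiar class of applications identifiers, the URI, using only Python's standard library:

    # A toy illustration, not the draft's framework: the pieces an
    # applications identifier typically carries, seen through a URI.
    from urllib.parse import urlparse

    uri = "https://example.com/reports/2015?format=pdf#section-2"
    parts = urlparse(uri)

    print(parts.scheme)    # 'https': how to resolve the identifier
    print(parts.netloc)    # 'example.com': the resolution authority
    print(parts.path)      # '/reports/2015': the resource within it
    print(parts.query)     # 'format=pdf': parameters for resolution
    print(parts.fragment)  # 'section-2': client-side context

Every identifier scheme ends up answering the same questions (who resolves it, to what, in what context), which is the sort of recurring design discussion the draft aims to capture.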

Future versions of this or other documents are intended to delve more deeply into questions of design choices, as well as the broader question of applications architectures (which are uniquely tied to identifiers, content, and resolution).

That’s the theory behind the draft.  It is an “-00” version, with all the draftiness that implies.  My hope is that it will stimulate some discussion and feedback.  I’d love to hear your thoughts — comment here, send me an e-mail, or catch me in Dallas at IETF92.

@USCongress: Thanks for making my point!

The long-standing and generally-held belief of the Internet community has been that the Internet’s governance should be based on a “multistakeholder” model.  Whatever you may surmise is the proper definition of that word, we should readily agree it doesn’t mean that a single government, any single government, should have override control of major swaths of the Internet or its support functions.

This year, there have been constructive community steps towards reducing the Internet’s dependency on a single nation, as well as a variety of reminders why it is important to make that effort successful.

Hence the global satisfaction at the NTIA’s announcement in March 2014 that it would seek a multistakeholder-model-supporting proposal to transition the NTIA (and the US government) out of its oversight role for the Internet Assigned Numbers Authority (IANA).    For many, this has been a long time coming — certainly, the Internet Architecture Board, on behalf of the IETF, has been signalling (to the NTIA, publicly) its concerns about the IETF’s lack of control over its own standards’ parameter assignment since at least the days when I was the IAB Chair.

The communities that have actual responsibility for managing the names, numbers, and other protocol parameters have, since March, stepped up to engage in developing the pieces of the requested proposal.   These are not random strawperson proposals to define a new Internet or governance system:  the communities involved are dependent on the IANA function for getting their own work done, and the focus has been on ensuring that the Internet’s naming, numbering and protocol development functions will continue to work reliably, responsibly and without undue interference in a post-NTIA-transition world.

For the protocol parameters part of IANA, the IETF's IANAPLAN working group was chartered "to produce an IETF consensus document that describes the expected interaction between the IETF and the operator of IETF protocol parameters registries."    From my vantage point as co-chair of the WG, I have seen the WG's extensive discussion of the issues at hand, and watched the document editors do a sterling job of producing a document that will be the basis of the IETF's contribution to the proposal.  With the WG's document in last call across the IETF until the 15th of December (err, today!), the IETF is on track to have its contribution done by the January 15th deadline set by the inter-community coordinating committee.  (See IETF Chair Jari Arkko's blog post for more details).

Just in case anyone’s energy was flagging before we finish the final details, there are timely reminders of why it is important to keep pressing on with defining (and realizing) the IANA in a post-NTIA reality.  As noted in Paul Rosenzweig’s article on Lawfare, “Congress Tries To Stop the IANA Transition — But Does It?”, a different part of the US government (the US Congress) is trying to stop the NTIA’s actions:

“Now Congress has intervened.  In the Omnibus spending bill that looks to be going through Congress this week the following language appears:

SEC. 540. (a) None of the funds made available by this Act may be used to relinquish the responsibility of the National Telecommunications and Information Administration during fiscal year 2015 with respect to Internet domain name system functions, including responsibility with respect to the authoritative root zone file and the Internet Assigned Numbers Authority functions.

(b) Subsection (a) of this section shall expire on September 30, 2015.”

Rosenzweig goes on to observe that the provision may well not have the expected impact, and might have more deleterious effects for the US.  Perhaps this is Congress attempting to use the budget process to stop the NTIA’s actions in their tracks; perhaps it’s just budget-jockeying on a scale not comprehended outside the limits of Washington, D.C.  But — It.Doesn’t.Matter.

Most of the Internet’s users do not live in the country in question, let alone have a voice in those discussions.  Nevertheless, they are impacted by the outcome.   Which is why the Internet community, which is global, and has solicited input broadly, is stepping up to create a future for the IANA that will:

  • Support and enhance the multistakeholder model;
  • Maintain the security, stability, and resiliency of the Internet DNS;
  • Meet the needs and expectation of the global customers and partners of the IANA services; and,
  • Maintain the openness of the Internet.

Clearly, those criteria cannot be satisfied with control in the hands of any single government, as the US Congress's actions remind us now!    The question is not whether the US government retains its historical role as contract-holder for the IANA functions.  The question is how to best meet the criteria thoughtfully laid out by the NTIA.


Internet… Impossible!

Today is the official launch of a new ThinkingCat Enterprises project — InternetImpossible.   The purpose of the project is to capture, share, and raise awareness of the many and varied wonders of the Internet: from its technology, to its reach, to its impact on people, on cultures, and on ways of doing things.

It's a storybook.  And, like all good storybooks, it has lessons, or at least valuable learnings that should be remembered and shared.   The Internet is, in some ways, being taken for granted.  Along with that ease and familiarity comes an increase in efforts to apply existing norms, processes and problem-solving approaches to it.   So take a moment to review the stories.  Come back to read new ones.  And, if you've got a great story about how the Internet is impossible, or has enabled you to do something impossible, please share!  (Send an e-mail to "editor" at "internetimpossible.org").

That’s it.  Why are you still here? 😉  Go check out http://www.internetimpossible.org .


Web encryption — it’s not just for e-commerce, anymore.

Yesterday, I re-tweeted Cloudflare’s announcement that they are providing universal SSL for their customers. [1]   I believe the announcement is a valuable one for the state of the open Internet for a couple of reasons:

First, there is the obvious — they are doubling the number of websites on the Internet that support encrypted connections.    And, hopefully, that will prompt even more sites/hosting providers/CDNs to get serious about supporting encryption, too.    Web encryption — it’s not just for e-commerce, anymore.

Second, and no less important, is the way that the announcement articulates and shares their organizational thought processes.  They are pretty clear that this is not a decision made to immediately and positively impact their bottom line of business.  It’s about better browsing, and a better Internet in the long run is better business.  And, they are also pretty open about the challenges they face, operationally, to achieve this.    That’s another thing that can be helpful to other organizations contemplating the plunge to support SSL.

So, go ahead and have a read of their detailed announcement — and please don't forget to come back and check if this website supports encrypted connections.   It does not :-/   (yet).  I've added it to my IT todo list — right after dealing with some issues in my e-mail infrastructure.  I asked the head of IT for a timeline on that, and she just gave me a tail-flick and a paw-wash in response.  Life as a micro-enterprise.
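
If you want to run a similar check against any site yourself, here is a quick sketch (my own, using only Python's standard library; "example.com" is just a placeholder hostname):

    # Attempt a TLS handshake and report whether the site presents a
    # certificate that verifies against the local trust store.
    import socket
    import ssl

    def supports_https(hostname: str, port: int = 443) -> bool:
        context = ssl.create_default_context()  # verifies certificate and hostname
        try:
            with socket.create_connection((hostname, port), timeout=5) as sock:
                with context.wrap_socket(sock, server_hostname=hostname) as tls:
                    cert = tls.getpeercert()
                    print(hostname, "OK; certificate expires:", cert["notAfter"])
                    return True
        except OSError as err:  # ssl.SSLError is a subclass of OSError
            print(hostname, "no verifiable HTTPS:", err)
            return False

    supports_https("example.com")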

More substantially, I could easily become a Cloudflare customer and thus enable encryption up to the Cloudflare servers.  But, proper end-to-end encryption requires my site to have its own certificate, which means a unique IP address for this website; the going rate for that, given where my site is hosted, is $6/mo.   That adds, substantially, to the cost of supporting a website, especially when you might have several of them kicking around for different purposes.

There's work to be done yet in the economics of the whole security system, it seems to me.    Open discussion of practical issues and eventual workarounds does seem like a good starting place, though.

 

[1] http://blog.cloudflare.com/introducing-universal-ssl/

Toys in “The Attic”: C15N

Time to introduce a new feature on the ThinkingCat site:  The Attic.

It is with chagrin that I acknowledge that I am an old enough <fill in appropriate but not-too-abusive-please epithet> that many hot new technology standards discussions are ringing in resonance with the long, hard exercises I recall from years past.   In particular, many of the discussions around "information centric networking", "named data networking", and new ways to handle intellectual property rights intended for digital media are working through similar problem spaces.  When is a resource "the same" enough to be the same?  Et cetera.

From my perspective, there was a vibrant community discussion of those issues in the heyday of standardization of Uniform Resource Identifiers at the IETF in the 1990s and early 2000s.   There was a small core of that community that really wanted to push URIs to be more than just "web addresses", and saw an application infrastructure standards roadmap.  That roadmap never got implemented — at some point we acknowledged that the implementing community was not as keen, and there's no fun in defining standards that never get used.

I would like to believe that the ICN and other groups have the implementors with them, and enough interest in the outcome to solve some of these issues that are being revisited.  It would also be useful if we could somehow short-circuit the learning curve, and not tread through all the same sequences.

Perhaps that is a vain hope, but it is the spirit with which I offer “The Attic” — a place where I intend to post up various remnants of those discussions, as culled from my spotty archives (driven by my even spottier recollection).

Today’s  inaugural contribution is on “Contextualized (URI) Resolution — C15N” (C15N because there are 15 letters between “c” and “n” in “contextualization”… get it?  Hey, I didn’t say the humour aged well).  That work never got beyond the BoF stage at the IETF, but the same questions arise when we look at any kind of advanced information resolution.
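
For anyone who wants the joke mechanized, here is a throwaway numeronym generator (my own toy, nothing to do with the actual C15N work):

    # Numeronyms: first letter, count of the letters in between, last letter.
    def numeronym(word: str) -> str:
        if len(word) <= 3:
            return word  # too short to bother abbreviating
        return f"{word[0]}{len(word) - 2}{word[-1]}"

    print(numeronym("internationalization"))  # i18n
    print(numeronym("localization"))          # l10n
    print(numeronym("contextualization"))     # c15n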

This is an experiment.  If nothing else, I’ll have a somewhat organized version of my own archive when I’m done 😉    But, if you find this useful, let me know — I’ll be more motivated to add to it.  If you have suggestions — of content or format, I’d also be happy to know.  Feel free to leave a comment here, or email me (I’m “ldaigle” at this site’s domain name).

P.S.:  Apologies to Twitter followers for the double-tweet of the last posting.   I had set up an app to auto-tweet my blog posts here, because automation is So!Cool! and then decided I’d really rather handcraft my tweets — authenticity is important to me.  Apparently, I failed to stomp adequately on the auto-tweeter app.  More stomping has been applied — let’s see if this works better.   My Twitter account is, after all, my1regret …

Internet Governance — When Worlds Collide

“Internet governance” is one of those catchy phrases that people bandy about with the knowing assurance that everyone knows what is under discussion — or with a view to ensuring that crispness and clarity remain elusive.   The Internet is not random, nor even particularly chaotic:  there have been elements of Internet governance since the inception of the network.

The reality is that governance (as in management) of the Internet has existed and evolved to meet the needs of the Internet as it has developed over the last four and a half decades.  This started with the need to have (open) standards for interoperable networking and agreed norms for acquiring and using parameters in those protocols.  It evolved as the availability of some of those parameters (IPv4 addresses) proved inadequate for expected needs, especially given the sizes of the original allocation grants.

Even before the “g” in “Governance” started being capitalized,  the Internet community organized itself to have a global, yet regionalized, system for open development of formally implemented policies for management of IP address allocation.  Let me say that more directly.  Problem:  handing out chunks of address space was wasteful and leading to rapid runout of IPv4 addresses.  Solution:  the Internet community built bottom-up, open policy development institutions to manage the equitable allocation of the addresses that remained.  That worked so well that the deployment of the successor protocol with a massive address space (IPv6) was deferred for a decade.
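
To put rough numbers on that wastefulness, here is a quick back-of-the-envelope sketch (my own illustration, using Python's standard library):

    # How big was a classful "chunk"? One /8 grant, versus all of IPv4.
    import ipaddress

    total = 2 ** 32  # the entire IPv4 address space
    class_a = ipaddress.ip_network("10.0.0.0/8")  # one classful "Class A" grant

    print(f"IPv4 space:   {total:,} addresses")
    print(f"One /8 grant: {class_a.num_addresses:,} addresses")
    print(f"A single grant consumed {class_a.num_addresses / total:.2%} of the space")
    # About 16.7 million addresses per grant, whether or not the recipient
    # could ever use them all; only 256 such grants exist in total.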

While this approach to identifying and addressing problems for the Internet has worked well for those involved in developing the Internet, it's not such a comfortable (recognizable, formal, predictable, <fill in the blank as you like>) approach for those who are on the outside looking in.  And those are the people who are increasingly impacted by the Internet and its use:  governments, law enforcement agencies, other businesses.  These worlds are colliding.

[Slide: Tussle -- Worlds Collide, Internet Governance]

I explored that concept and others when, in June,  I  gave a lecture for the Norwich University Residency Week conference.   I’ve posted my slides for the talk on my Publications  page (See: 20140618-NorwichResidencyWeekInternetGovernance-cc).

The 3 key concepts of the presentation were:

  1. Internet governance sparks fly when worldviews collide — as described above.
  2. The Internet knows no physical boundaries — it wasn't built with a view to following national or jurisdictional boundaries.  Imposing rules and regulations on it forces an unnatural network topology with unhealthy side effects.
  3. Internet governance should not only be about regulating technology and its use — for example, solving issues with abuse of “intellectual property rights” is more about getting agreement on what intellectual property is and how it should be handled than it is anything to do with networking.

As alluded to above, the definition of Internet governance (or Governance) has evolved over time.

  1. Making the Internet work through responsible construction and sharing
    • Original definition
    • Still see sparks of it – collaborative discussion of best paths forward in network architecting and operation
  2. Code for “management of critical Internet resources on a global basis”
    • International struggle to control the domain name system and/or IP addresses
    • Can the US pull the plug on a country’s Internet?
      • No
      • Country code domain names (e.g., .br for Brazil) rely on the DNS root zone file (see the sketch just after this list)
  3. Physical world governance meeting and incorporating the Internet and its uses
    • As the Internet becomes increasingly part of our lives, it’s hard to separate “governance of the population” from the Internet
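
To see what that root-zone dependence looks like in practice, here is a minimal sketch that asks the DNS which name servers .br is delegated to (it assumes the third-party dnspython package, installable with "pip install dnspython"):

    # Query the NS records for the .br country-code top-level domain;
    # this delegation information originates in the DNS root zone.
    import dns.resolver

    for ns in dns.resolver.resolve("br.", "NS"):
        print(ns)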

The Internet was not designed as a single-purpose, coherent network – it doesn’t even notice national boundaries.  That, in fact, is what gives us much of what we love about it.    So, increasing regulation of the wrong things could break what we love.

  • Forcing networks to line up on national boundaries
  • Regulating the Internet when really it’s some service that you wanted to focus on (e.g., “telephony”)

At the same time, there are key issues that need regulation in order to foster an orderly future for all.  So, we all need to address the tussles when worlds collide, and figure out how to do it right.

Internet governance and… you

Internet Governance — if you’re into it, you’re all over it.  If you’re not into it, you probably think it’s somebody else’s problem.   But, the issue with that thinking is that governance (note the small “g”) of the Internet was specifically designed to be the business of everyone who uses and builds it.   The further away we get from that mentality, the more the Internet becomes an industry-driven product and not an inter-network.

Such was the message I delivered when, in June, I gave a keynote lecture to introduce the graduating class of MSISA (Master of Science in Information Security & Assurance) at Norwich University to the rudiments of Internet governance.  I’ve posted my slides for the talk on my Publications  page (See: 20140617-NorwichResidencyWeek-MSISA-cc).

The Internet has so infiltrated our daily lives that it is changing how we go about many aspects of our non-digital lives.  Just imagine trying to buy a house, the most physically-rooted, tangible object many of us aspire to owning, without having the resources of the World Wide Web.  It’s not just the realty sites — you probably also want to review the local schools, perhaps check out the social and civic activities in the area, and generally inform yourself with what people who live there have to say.

Many of those resources are available because the Internet allows "innovation without permission".  Concerned citizens and enthusiastic locals who never would have thought of themselves as "content publishers" can readily set up information resources.  (Seriously — I can check out the food safety inspection report for our local grocery store online).   Of course, the World Wide Web itself is the poster child for the value of allowing innovation (on the Internet) without requiring permission.

The Internet’s management, or governance, has grown up over the decades of its existence.  No longer uniquely the purview of a handful of (primarily US) researchers, the Internet’s developers/deployers/users have set up open institutions to engage successive generations of Internet supporters in the process of thoughtful management of its resources.

Understanding the impact and value of the existing institutions, as well as ensuring the Internet’s users don’t become a simple “audience” to its services, are key challenges of evolving the Internet’s governance in the face of today’s political pressures.