Tag Archives: Internet

Updates from Ciena

Pluggables: Their role in coherent technology evolution

By Patricia Bower – In the optical networking industry, pluggable client optical modules are a dominant trend for very short links within buildings and campus networks. The market economics driving the proliferation of these pluggables include factors such as simplicity, interoperability and volume-driven cost. And in the domain of short-reach (sub-10km), point-to-point fiber optic connections, these advantages of small form-factor pluggable modules shine through.

This is particularly so in the case where transport of high-speed Ethernet client signals is the primary requirement. Connectivity within and between data centers has grown at a very rapid rate over the last few years, both from the perspective of transmission speed and number of connections. The use of optical signaling to transport these high-speed Ethernet signals has proven to be very efficient.

The optical networking industry has a well-established and large ecosystem of vendors bringing small form-factor client modules to market. Many of these are supported by MSAs (Multi-Source Agreements), which fall into two types: those that define optical transmission specifications and those that define mechanical form factors.

More recently, the data rates supported by pluggable form factors have increased. The 100G Lambda MSA group, of which Ciena is a member, has exhibited live demonstrations of interoperable Ethernet modules from member companies. The 100G Lambda MSA specifies 100Gb/s links over 2km and 10km of single-mode fiber (SMF), and 400Gb/s links over 2km of SMF. These modules are based on PAM-4 coding to reach a data rate of 100Gb/s per wavelength. more>
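For a sense of the arithmetic behind that per-wavelength figure, here is a minimal Python sketch. The overhead factors used (256b/257b transcoding and RS(544,514) "KP4" FEC, common in 100GbE implementations) are assumptions for the illustration rather than values quoted above; with them, 100Gb/s of client traffic maps to a PAM-4 symbol rate of roughly 53 GBd on a single wavelength.

```python
# Illustrative arithmetic: why ~100 Gb/s per wavelength with PAM-4 coding
# corresponds to a symbol rate of roughly 53 GBd.
# The overhead figures below are assumptions drawn from common 100GbE practice
# (256b/257b transcoding, RS(544,514) "KP4" FEC), not values from the MSA text above.

CLIENT_RATE_GBPS     = 100.0        # nominal Ethernet client data rate
TRANSCODE_OVERHEAD   = 257 / 256    # 256b/257b transcoding
FEC_OVERHEAD         = 544 / 514    # Reed-Solomon RS(544,514) FEC
BITS_PER_PAM4_SYMBOL = 2            # PAM-4 carries 2 bits per symbol

def pam4_symbol_rate_gbd(client_rate_gbps: float) -> float:
    """Approximate PAM-4 symbol rate (GBd) for a given client data rate."""
    line_rate = client_rate_gbps * TRANSCODE_OVERHEAD * FEC_OVERHEAD
    return line_rate / BITS_PER_PAM4_SYMBOL

print(f"~{pam4_symbol_rate_gbd(CLIENT_RATE_GBPS):.3f} GBd per wavelength")  # ~53.125 GBd
```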


Updates from Adobe

The Art of the Unnatural
By Brendan Seibel – When he was a kid, Jason DeMarte enjoyed visiting natural history museums to see the dioramas filled with taxidermy wildlife and carefully positioned plants.

As an adult, he determined that those scenes are intended more to capture the imagination than to document reality. And while the diorama designers’ motives may be pure, there is a darker side.

DeMarte saw a correlation between the museums’ “perfect, pristine snippets of nature” and product photography of the sort he’d done for Toys ’R’ Us to pay the bills while earning an MFA in photography. That experience of creating flawless images of merchandise exemplified photography’s role in cultivating consumer desire for false perfection through manipulation and good lighting.

“I started thinking about nature in a different way, as a commodity, as a way of packaging, promoting, and selling a commodifiable object,” DeMarte says.

The disconnect between manufactured perceptions of nature and the imperfect reality has been DeMarte’s artistic focus ever since. His work is a commentary on the artifice underpinning our concept of the world, as well as our constant desire for something “better.” more>


Updates from ITU

Monitoring our changing planet
By Houlin Zhao – The Earth is a fragile planet with finite resources to sustain the world’s growing population. As we work together to build a sustainable global economy, spaceborne remote sensors are poised to play an increasingly important role in achieving the United Nations’ Sustainable Development Goals (SDGs).

Indeed, ITU Member States and the global community now see the potential for using Earth observations and geospatial information as fundamental inputs for achieving the SDGs. Remote sensing provides critical information across a wide range of applications, including air quality, disaster management, public health, agriculture, water availability, coastal zone management, and the health of the Earth’s ecosystems.

For example, spaceborne sensing data is used to assess the impact of natural disasters and to improve preparedness for hazardous events around the globe. Data from spaceborne remote sensors is also increasingly used to guide efforts to minimize the damage that urban growth does to the environment.

These are just a few examples of how remote sensing measurements — and the science they enable — provide a great service to humanity. This edition of the ITU News Magazine provides more such examples and a wealth of insight into how ITU’s work helps realize the social and economic benefits of Earth observation from space. more (pdf)>


Tools for thinking: Isaiah Berlin’s two concepts of freedom

By Maria Kasmirli – ‘Freedom’ is a powerful word.

We all respond positively to it, and under its banner revolutions have been started, wars have been fought, and political campaigns are continually being waged.

But what exactly do we mean by ‘freedom’?

The fact that politicians of all parties claim to believe in freedom suggests that people don’t always have the same thing in mind when they talk about it.

Might there be different kinds of freedom and, if so, could the different kinds conflict with each other? Could the promotion of one kind of freedom limit another kind? Could people even be coerced in the name of freedom?

The 20th-century political philosopher Isaiah Berlin (1909-97) thought that the answer to these questions was ‘Yes’, and in his essay ‘Two Concepts of Liberty’ (1958) he distinguished two kinds of freedom (or liberty; Berlin used the words interchangeably), which he called negative freedom and positive freedom.

Negative freedom is freedom from interference. You are negatively free to the extent that other people do not restrict what you can do. If other people prevent you from doing something, either directly by what they do, or indirectly by supporting social and economic arrangements that disadvantage you, then to that extent they restrict your negative freedom.

Berlin stresses that it is only restrictions imposed by other people that count as limitations of one’s freedom. Restrictions due to natural causes do not count. The fact that I cannot levitate is a physical limitation but not a limitation of my freedom. more>

Updates from Ciena

The implications behind service and content provider requirements for coherent optical solutions
By Helen Xenos – In 2007, I was asked to write a white paper about this really cool new “coherent technology” our team was working on and explain how the coherent receiver would completely revolutionize optical networks as we knew them. As I tried to get started, I quickly learned that the only sources of content were the engineers actually working on the project – scrolling through page after page of Google search results netted zero information.

In the end, I wrote the paper by transcribing what a dear co-worker and mentor, Michel Belanger, who was one of the designers, patiently explained to me (it took several hours). He made sure I understood the significance of coherent technology and how it would change the game in optical networks.

Fast forward a dozen years – there is no shortage of information pertaining to coherent technology, and there are about a dozen coherent module and system suppliers. Coherent optical systems have become the networking foundation that underpins the digital economy that we know today.

Network providers are ubiquitously deploying coherent technology to scale networks for capacity, reduce transport costs and provide a better end-user experience to their customers. In fact, they are now looking at expanding the role that coherent technology plays in the network and deploying it in space- and power-optimized applications, in addition to traditional infrastructure, submarine and data center interconnect (DCI) build-outs.

As coherent technology plays an increasingly critical role for successful network evolution, we must step back and ask ourselves:

  • What do network providers need from their coherent solution partners to succeed?
  • What are the implications of divergent customer and networking requirements for the suppliers of the technology?

more>


Democratising Europe: by taxation or by debt?

Europe desperately needs to resolve its collective-action problem to emerge from the crisis. Democratizing Europe, with a fiscal capacity, is better than monetary easing.
By Manon Boujou, Lucas Chancel, Anne-Laure Delatte, Thomas Piketty, Guillaume Sacriste, Stéphanie Hennette and Antoine Vauchez – On December 10th 2018 we launched a Manifesto for the Democratization of Europe, along with 120 European politicians and academics. Since it was launched, the manifesto has accrued over 110,000 signatures and it is still open for more. It includes a project for a treaty and a budget enabling the countries which so wish to set up a European Assembly and a genuine policy for fiscal, social and environmental justice in Europe—all available multilingually on the website.

In the Guardian, on December 13th, Yanis Varoufakis presented his ‘Green New Deal’ as an alternative to the manifesto, which he considers to be irrelevant.

The Varoufakis plan builds on the European Investment Bank (EIB), which would be responsible for issuing bonds to the value of €500 billion per annum, with these securities included in the securities-purchase program of the European Central Bank (ECB).

The main criticism by Varoufakis seems to be the following: why do you want to create yet more new taxes when one can create money? Our budget is indeed financed by taxation, whereas his plan is financed by public debt.

In his proposals, private firms involved in the ecological transition borrow money from the ECB, after having been selected by the EIB.

In fact, part of this arrangement already exists in the form of the Juncker plan. What Varoufakis adds is the purchase of securities by the ECB rather than by private investors. more>

Artificial Intelligence and the Future of Humans

Experts say the rise of artificial intelligence will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will.
By Janna Anderson, Lee Rainie and Alex Luchsinger – Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats.

As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?

Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.

The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of the wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.

Yet, most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. more>

Updates from Siemens

Why fulfilling airworthiness requirements means going digital
By Dave Chan and John Cunneen – Any organization that must consistently demonstrate compliance with airworthiness requirements can relate to the frustrating task of locating and providing proof that its products will perform in accordance with the standards, rules and laws of a myriad of countries.

Nowhere is this more true than in the aerospace industry, where everything is built on safety. Every rule, every design requirement has blood on it. These rules exist because someone was hurt or could be, a plane could crash, or any number of catastrophic incidents could occur.

This is why there are rigorous standards in place to ensure anything that can take off and land, from the smallest glider and helicopter to the largest commercial airliner and military jet, must receive and maintain an airworthiness certificate. The process of aircraft certification can be daunting, simply because many organizations don’t take a proactive approach, from the development phase through delivery, to make it otherwise. more>


Updates from ITU

Time to eliminate the password: New report on next-generation authentication for digital financial services
By ITU News – “We don’t want digital financial services to be built on the wrong foundation, which is the password,” says Abbie Barbir, Rapporteur for ITU standardization work on ‘Identity management architecture and mechanisms’ (Q10/17).

Over 3 billion usernames and passwords were stolen in 2016, and the number of data breaches recorded in 2017 was 44.7 per cent higher than in 2016.

“We are moving away from the ‘shared secret’ model of authentication,” says digital ID strategist and standards expert, Andrew Hughes of InTurn Consulting, referring principally to the username-password model of authentication.

“Considering the prevalence of data breaches, there are no secrets anymore,” says Hughes.

Designed to overcome the limitations of passwords, specifications developed by the FIDO Alliance (‘Fast Identity Online’) enable users to authenticate locally to their device using biometrics, with the device then authenticating the user online with public key cryptography.

This model is not susceptible to phishing, man-in-the-middle attacks or other forms of attacks targeting user credentials.
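As a rough illustration of that model (a minimal sketch only – this is not the FIDO2/WebAuthn protocol or its data formats, and the use of Python's cryptography package with Ed25519 keys is an assumption made for the example), the device keeps a private key that is unlocked locally, while the server stores only the matching public key and verifies a signed one-time challenge:

```python
# Sketch of public-key challenge-response authentication in the spirit of FIDO:
# the private key never leaves the device (it is unlocked locally, e.g. by a
# biometric check), and the server stores only the public key plus a fresh challenge.
# Illustrative only; not the FIDO2/WebAuthn wire protocol.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- Registration: the device generates a key pair; only the public key is sent to the server.
device_private_key = Ed25519PrivateKey.generate()
server_stored_public_key = device_private_key.public_key()

# --- Authentication: the server issues a one-time random challenge...
challenge = os.urandom(32)

# ...the device signs it after the user unlocks the key locally (biometric or PIN)...
signature = device_private_key.sign(challenge)

# ...and the server verifies the signature against the stored public key.
try:
    server_stored_public_key.verify(signature, challenge)
    print("Authenticated: no shared secret ever crossed the network.")
except InvalidSignature:
    print("Authentication failed.")
```

Because the server holds no shared secret, a phished or breached credential database yields nothing an attacker can replay, which is the property the experts quoted above are pointing to.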

“This is the biggest transformation we have seen in authentication in 20 years,” says Jeremy Grant, Managing Director of Technology Business Strategy at Venable. more>


Updates from Ciena

4 critical requirements for the next-gen photonic layer
By Paulina Gomez – Today’s market dynamics are making it harder for network providers to effectively compete in an environment where revenue per bit is declining, and network bandwidth requirements are exploding. In the face of these business challenges, network providers are realizing they must evolve and transform their networks towards a more programmable infrastructure that can scale and respond on demand, to meet changing customer expectations and unpredictable traffic requirements.

While coherent optics are a critical element in enabling a programmable optical infrastructure, alone they are not enough to fulfill operators’ requirements for successful network transformation.

So what else is needed?

The photonic layer is the foundation of this programmable infrastructure, leveraging the latest coherent optical technology to deliver maximum scale at the lowest cost per bit. An examination of the requirements of metro and long-haul infrastructure applications, including global data center interconnect (DCI) networks, reveals a growing need for an agile, resilient and intelligent photonic layer.

This Reconfigurable Add-Drop Multiplexer (ROADM)-based optical foundation leverages flexible, instrumented photonics and Layer 0 software control to scale the network for maximum capacity at the lowest space, power, and cost per bit. more>
