Category Archives: Communication industry

Updates from Ciena

Photonic integration and co-packaging: Design tools for footprint optimization in data center networks
As traffic within and between data centers continues to grow, operators need to constrain the resulting increase in power consumption to minimize operational costs. This is driving the need to manage footprint and power at the system design level. Photonic integration and co-packaging are related approaches to addressing area and power challenges for networking applications.

By Patricia Bower – Data center networks have evolved rapidly over the last couple of years, in large part due to the scalability and flexibility supported by today’s compact modular DCI solutions.  System designers leveraged advances in key foundational technologies to pack significant capacity and service density into these products, and their popularity is growing as these solutions capture new market segments.

The same advances have also paved the way for new consumption models for coherent optical technology in the form of footprint-optimized, pluggable solutions. As traffic for server interconnect within data centers continues to grow, greater capacity for interconnect between data centers (DCI) will also be required.

Scaling data center networks to deliver more bandwidth increases power consumption and real estate requirements for operators, which in turn drives up capital and operational costs.

With each new generation of switching platform and coherent optical transport systems, designers have met the challenges by increasing throughput density and reducing power/bit. Both intra-DC and DCI traffic flows will increasingly rely on advances in foundational technologies and system design options to mitigate power consumption while maximizing interconnect densities.

What are these foundational technologies?  They include:

  • Complementary Metal-Oxide Semiconductor (CMOS)
  • Indium phosphide (InP)
  • Silicon photonics (SiPhot)

In networking applications, CMOS is the basis both for the high-capacity switch chips used in router platforms and for coherent optical digital signal processors (DSPs).

InP and SiPhot are used to build electro-optical circuits for signal transport over optical fibers. Together, the DSP and electro-optical components are the heart of coherent optical transport systems. more>

Related>

Updates from Ciena

How coherent technology decisions that start in the lab impact your network
What is the difference between 400G, 600G and 800G coherent solutions? It seems obvious, but is it just about maximum wavelength capacity? Why are different baud rates, modulations, or DSP implementations used, and more importantly, what are the networking implications associated with each?
By Helen Xenos – 32QAM, 64QAM, and hybrid modulation….32, 56, 64, now 95Gbaud? Are they really any different? Fixed grid, flex grid, what’s 75GHz? Is your head spinning yet?

Coherent optical technology largely determines how much capacity and how many high-speed services can be carried across networks, and it plays a central role in controlling their cost. But with multiple generations of coherent solutions available and more coming soon, navigating the different choices can be difficult. Unless you are immersed in the details and relationships between bits and symbols, constellations and baud in your everyday life, it can be confusing to understand how the technology choices made in each solution influence overall system performance and network cost.

To clarify these relationships, here is an analogy that helps provide a more intuitive understanding: consider performance-optimized coherent optical transport as analogous to freight transport.

The goal of network providers using coherent is to transport as much capacity as they can, in the most cost-efficient manner that they can, using wavelengths across their installed fiber. This is similar to wanting to be as efficient as possible in freight transport, carrying as much payload as you can using available truck and road resources. more>
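To put rough numbers on the freight analogy, here is a minimal back-of-the-envelope sketch in Python of how symbol rate (baud), modulation order, and dual-polarization transmission combine into an approximate wavelength capacity. The 15% overhead figure and the specific baud/QAM pairings are illustrative assumptions, not the specifications of any particular coherent solution.

    import math

    def approx_line_rate_gbps(baud_g, qam_order, polarizations=2, overhead=0.15):
        """Very rough net rate for a dual-polarization coherent carrier.
        The overhead fraction (FEC, framing) is an illustrative assumption."""
        bits_per_symbol = math.log2(qam_order)
        raw = baud_g * bits_per_symbol * polarizations
        return raw * (1 - overhead)

    print(round(approx_line_rate_gbps(32, 16)))   # ~218 Gb/s at 32 Gbaud, 16QAM
    print(round(approx_line_rate_gbps(64, 16)))   # ~435 Gb/s: doubling the baud doubles capacity
    print(round(approx_line_rate_gbps(64, 64)))   # ~653 Gb/s: 64QAM adds bits per symbol, trading away reach

The point of the sketch is the shape of the trade-off rather than the exact figures: broadly speaking, raising the baud scales capacity, while denser constellations add bits per symbol at the cost of optical reach.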

Related>

Updates from Ciena

How coherent optics improve capacity, performance and competitiveness for cable MSOs

Cable Multiple System Operators (MSOs) will be using coherent optics in their access networks to help solve a vital business challenge: the need to improve scale and reduce costs while delivering high data rates to end customers.
By Fernando Villarruel – MSOs must build a foundation for their networks that provides the needed capacity, introduces significant operational and cost efficiencies, and positions their businesses for future growth. This growth includes symmetric bandwidth support for the evolution of packet cores to cloud and aggregation of multiple revenue streams including mobile, business services and IoT.

Coherent optics facilitates growth because it enables massive scalability and maximizes network performance while using far fewer components, reducing equipment costs as well as the time and effort it takes to manage the network. These cost and operational benefits allow MSOs to be more competitive as they can place greater attention on delivering a compelling and differentiating customer experience.

Coherent optics employ a technique well known in the cable RF community: QAM, but in optics! This technology uses a sophisticated symbol-based modulation scheme at higher baud rates to make efficient use of the available optical spectrum, so MSOs can optimize capacity and reach for a given link. With Ciena’s recently announced WaveLogic 5, we will be able to support 800Gb/s on a single wavelength for transport, and up to 200Gb/s on a single coherent pluggable wavelength for access! more>
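As a back-of-the-envelope illustration of what "efficient use of the optical spectrum" means in practice, the short Python sketch below relates per-wavelength capacity and channel width to spectral efficiency and total fiber capacity. The 100 GHz and 50 GHz channel widths and the roughly 4.8 THz of usable C-band spectrum are assumptions made for the sake of the example, not a description of any particular deployment.

    def spectral_efficiency_bps_hz(capacity_gbps, channel_ghz):
        # Net bits per second carried in each hertz of occupied spectrum
        return capacity_gbps / channel_ghz

    def fiber_capacity_tbps(capacity_gbps, channel_ghz, band_ghz=4800):
        # Total capacity if the whole band is filled with identical channels
        channels = band_ghz // channel_ghz
        return channels * capacity_gbps / 1000

    print(spectral_efficiency_bps_hz(800, 100))   # 8.0 b/s/Hz for an 800G wavelength in a 100 GHz channel
    print(fiber_capacity_tbps(800, 100))          # 38.4 Tb/s across the assumed band
    print(fiber_capacity_tbps(100, 50))           # 9.6 Tb/s for legacy 100G channels on a 50 GHz grid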

Related>

Updates from Ciena

Enterprise trends: Cloud, digital transformation, and the move to the Adaptive Network
In today’s digital world, enterprises in every industry vertical must now be hyper-focused on providing a higher quality customer experience, leading to the use of new technologies like AI, IoT, edge computing and others.
By Chip Redden – Before the world went digital, bringing new customers to an enterprise’s marketplace was a pretty straightforward process. Buyers had access to only limited sources of information, allowing the sellers to control the entire journey from discovery to sale to partnership.

Global digitization has changed this dramatically. Buyers have access to almost unlimited information, enter the sales process well aware of the pluses and minuses of every sales equation, and are very quick to change relationships for a better deal. They are demanding the same type of predictive, personalized, and custom experience they receive from digital innovators like Amazon, Google, and other leaders. To add to all this, buyers are asking for service and application access 24 x 7 from multiple types of devices, including mobile devices.

This means that enterprises in every industry vertical must now be hyper-focused on providing a higher-quality customer experience, actively partnering with customers who are willing to advocate for them on the strength of that better experience. It also means that the network that underpins this relationship has to change and adapt.

Enterprises have wholeheartedly embraced the move to cloud architectures and are now taking real advantage of the cloud’s capabilities. Many enterprises have begun transforming themselves to a more “platform-ized” model by using Application Programming Interfaces (APIs) to bundle applications and services from multiple, and sometimes competing, vendors and then deliver these bundles through a single platform/portal to the end customer. more>
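As a minimal sketch of what that “platform-ized” model can look like in code, the example below exposes one aggregation function that bundles offerings from two vendor APIs into a single response. The vendor URLs and response fields are hypothetical, used purely for illustration.

    import requests

    # Hypothetical vendor catalogs aggregated behind one portal
    VENDOR_APIS = {
        "connectivity": "https://api.vendor-a.example/v1/services",
        "security": "https://api.vendor-b.example/v1/offerings",
    }

    def bundled_catalog(customer_id):
        """Pull offerings from several (possibly competing) vendors into one bundle."""
        bundle = {}
        for category, url in VENDOR_APIS.items():
            resp = requests.get(url, params={"customer": customer_id}, timeout=5)
            resp.raise_for_status()
            bundle[category] = resp.json()
        return bundle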

Related>

Updates from Ciena

SD-WAN Gets (More) Real for Service Providers

By Kevin Wade – Interest in Software-Defined Wide Area Networking (SD-WAN), which is designed to offer enterprises cohesive end-to-end visibility and control of their multi-site WAN, continues to grow. Although SD-WAN was originally envisioned to give enterprises a ‘DIY’ approach to WAN management, most industry analysts and experts agree that managed SD-WAN services are the predominant consumption model for enterprises today and will remain so for the foreseeable future.

The trend toward managed SD-WAN services is good news for service providers, many of which were initially cautious that SD-WAN might reduce their revenues and/or weaken their relationships with key business customers. To the contrary, SD-WAN services have emerged as a rapidly growing new source of revenues, as well as one that offers service providers new opportunities to improve the customer experience.

I’ve been following the SD-WAN movement closely since nearly the beginning, and have been pleased to see some recent developments that signal the increasing maturity of SD-WAN services.

Without a doubt, SD-WAN services are becoming more established and accepted. And while Blue Planet isn’t inherently an SD-WAN solution, the deployment of SD-WAN services is one of Blue Planet’s biggest drivers and most common use cases. more>

Related>

Eye diagrams: The tool for serial data analysis

By Arthur Pini – The eye diagram is a general-purpose tool for analyzing serial digital signals. It shows the effects of vertical noise, horizontal jitter, duty cycle distortion, inter-symbol interference, and crosstalk, all of which can close the “eye.” While engineers have used eye diagrams for decades, oscilloscopes continually gain new features that increase the technique’s value.

Oscilloscopes form eye diagrams, which show the separation between the two binary data states "1" and "0", by overlaying multiple single clock periods on a persistence display. The accumulated display shows the history of multiple acquisitions.

Additive noise tends to close the eye vertically, while timing jitter and uncertainty close it horizontally. Duty cycle distortion (DCD) and inter-symbol interference (ISI) change the shape of the eye. The channel will fail if the eye closes to the point where the receiver can no longer distinguish the "0" and "1" states.

In the days of analog oscilloscopes, the eye diagram was formed by triggering the oscilloscope with the serial data clock and acquiring multiple bits over time using a persistence or storage display. This technique adds trigger uncertainty, or trigger jitter, to the eye diagram for each acquisition. Digital oscilloscopes instead form the eye by acquiring a very long record containing many serial bits.

The clock period is determined, and the waveform is broken up or “sliced” into multiple single-bit acquisitions overlaid in the persistence display. In this way, all the data is acquired with a single value of trigger jitter that’s eliminated by using differential time measurements within the eye. more>
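For readers who want to see the slicing idea in concrete form, here is a minimal Python/NumPy sketch that cuts a long acquisition into unit-interval segments and accumulates them into a 2-D persistence map, which is essentially what the oscilloscope display builds. The fixed number of samples per unit interval and the simple voltage binning are simplifications; a real instrument also has to recover the clock period from the data first.

    import numpy as np

    def eye_persistence(waveform, samples_per_ui, v_bins=256):
        """Overlay consecutive unit intervals of a serial waveform into a 2-D histogram."""
        n_ui = len(waveform) // samples_per_ui
        segments = waveform[:n_ui * samples_per_ui].reshape(n_ui, samples_per_ui)

        v_min, v_max = waveform.min(), waveform.max()
        hist = np.zeros((samples_per_ui, v_bins), dtype=np.int64)
        for col in range(samples_per_ui):
            # Map each segment's voltage at this time offset to a vertical bin
            idx = ((segments[:, col] - v_min) / (v_max - v_min) * (v_bins - 1)).astype(int)
            np.add.at(hist[col], np.clip(idx, 0, v_bins - 1), 1)
        return hist  # hist[t, v]: how often the signal hit voltage bin v at time offset t

Because every segment comes from the same acquisition, the overlay carries only a single trigger-jitter value, which is the advantage over the old analog method described above.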

Updates from Ciena

Trouble-to-Resolve: Assure Layer 3 Service Performance in Minutes
By Don Jacob – Service provider networks have come a long way from the flat networks of yesteryear. Today, they are highly complex with multiple hierarchies and layers, while running a plethora of services and technologies. Providers use the same underlying network to cater to different applications, ranging from financial applications to streaming video, each with its own unique performance and fault-tolerance requirements.

In this complex scenario, how can service providers assure the performance of their Layer 3 services, verify that services are being delivered, and ensure customer satisfaction? Take the case of a service provider that delivers MPLS services to hundreds of customers. Let us look at how a network engineer managing such a network handles a routing issue without a routing analytics solution.

Today, when a customer raises a ticket for a reachability or service delivery problem, the provider analyzes the issue manually, making the trouble-to-resolve process long and labor-intensive.

To start with, if the customer raises the trouble ticket while a connectivity issue is in progress, the first thing the provider needs to know is the routing path taken by the service. This requires the network engineer to find the source router and then run a traceroute from that router to determine all the hops along the path. Once the routers along the path have been identified, the engineer then logs into each router to check its performance.

This process is repeated on all routers along the path until the problematic router or link is identified. more>
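Scripted, the hop-discovery portion of this manual workflow looks roughly like the Python sketch below: run a traceroute toward the affected service, then probe each hop in turn. The addresses are hypothetical, and a real workflow would log into each router (for example via SSH or SNMP) to inspect its performance rather than simply pinging it.

    import re
    import subprocess

    def trace_hops(destination):
        """Return the hop IP addresses reported by the system traceroute."""
        out = subprocess.run(["traceroute", "-n", destination],
                             capture_output=True, text=True, check=True).stdout
        return re.findall(r"^\s*\d+\s+(\d+\.\d+\.\d+\.\d+)", out, flags=re.MULTILINE)

    def probe_hops(destination):
        for hop in trace_hops(destination):
            ping = subprocess.run(["ping", "-c", "3", hop], capture_output=True, text=True)
            print(hop, "reachable" if ping.returncode == 0 else "NOT RESPONDING")

    # probe_hops("203.0.113.10")   # hypothetical customer-facing address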

Related>

Why economics must go digital

By Diane Coyle – One of the biggest concerns about today’s tech giants is their market power. At least outside China, Google, Facebook, and Amazon dominate online search, social media, and online retail, respectively. And yet economists have largely failed to address these concerns in a coherent way. To help governments and regulators as they struggle to address this market concentration, we must make economics itself more relevant to the digital age.

Digital markets often become highly concentrated, with one dominant firm, because larger players enjoy significant returns to scale. For example, digital platforms incur large upfront development costs but benefit from low marginal costs once the software is written. They gain from network effects, whereby the more users a platform has, the more all users benefit. And data generation plays a self-reinforcing role: more data improves the service, which brings in more users, which generates more data.

To put it bluntly, a digital platform is either large or dead.

As several recent reports (including one to which I contributed) have pointed out, the digital economy poses a problem for competition policy.

Competition is vital for boosting productivity and long-term growth because it drives out inefficient producers and stimulates innovation. Yet how can this happen when there are such dominant players?

Today’s digital behemoths provide services that people want: one recent study estimated that consumers value online search alone at a level equivalent to about half of US median income.

Economists, therefore, need to update their toolkit. Rather than assessing likely short-term trends in specific digital markets, they need to be able to estimate the potential long-term costs implied by the inability of a new rival with better technology or service to unseat the incumbent platform.

This is no easy task because there is no standard methodology for estimating uncertain, non-linear futures. Economists even disagree on how to measure static consumer valuations of free digital goods such as online search and social media.

And although the idea that competition operates dynamically through firms entering and exiting the market dates back at least to Joseph Schumpeter, the standard approach is still to look at competition among similar companies producing similar goods at a point in time. more>

Updates from ITU

Iceland’s data centers are booming—here’s why that’s a problem
By Tryggvi Adalbjornsson – The southwestern tip of Iceland is a barren volcanic peninsula called Reykjanesskagi. It’s home to the twin towns of Keflavik and Njardvik, around 19,000 people, and the country’s main airport.

On the edge of the settlement is a complex of metal-clad buildings belonging to the IT company Advania, each structure roughly the size of an Olympic-size swimming pool. Less than three years ago there were three of them. By April 2018, there were eight. Today there are 10, and the foundations have been laid for an 11th.

This is part of a boom fostered partly by something that Icelanders don’t usually rave about: the weather.

Life on the North Atlantic island tends to be chilly, foggy, and windy, though hard frosts are not common. The annual average temperature in the capital, Reykjavík, is around 41 °F (5 °C), and even when the summer warmth kicks in, the mercury rarely rises above 68 °F (20 °C). Iceland has realized that even though this climate may not be great for sunning yourself on the beach, it is very favorable to one particular industry: data.

Each one of those Advania buildings in Reykjanesskagi is a large data center, home to thousands of computers. They are constantly crunching away, processing instructions, transmitting data, and mining Bitcoin. Data centers like these generate large amounts of heat and need round-the-clock cooling, which would usually require considerable energy. In Iceland, however, data centers don’t need to constantly run high-powered cooling systems for heat moderation: instead, they can just let in the brisk subarctic air.

Natural cooling like this lowers ongoing costs. more>

Related>

Updates from Ciena

The future of submarine networks. What’s NEXT?
By Brian Lavallée – Submarine cable networks are considered by most people in the know as critical infrastructure, and rightfully so. They’re the jugular veins of intercontinental connectivity that together enable the global Internet and erase vast transoceanic distances. We depend on submarine cable networks each and every day for both personal and business reasons, often without even knowing it, which is a testament to their carrier-grade robustness.

There’s simply no Plan B for submarine cables, which are the size of a common garden hose and are situated in the abysses of oceans the world over. This means that as an industry, we must continually innovate, adopt, and adapt to ensure these submerged engineering marvels continually evolve to meet the ever-changing demands from end-users, both humans and machines.

Spatial Division Multiplexing (SDM) submarine cables, Open Cables, Shannon’s Limit, and the increasing adoption of Artificial Intelligence and Machine Learning are hot topics across the submarine network industry.

Increasingly, technologies born in data centers and terrestrial networks are finding their way into submarine networks, and that’s a good thing. The network can and should be viewed end to end along the entire service path, overland and undersea, and increasingly, right into data centers. This means uniting technologies and networks across submarine, terrestrial, and data center domains for improved economies of scale driven by faster technology innovation cycles.

The Southern Cross Cable Network (SCCN) spans over 30,500km, which includes over 28,900km of submarine cables submerged on the Pacific Ocean seabed. The network is a major internet on-ramp that provides critical communication connectivity among Australia, New Zealand, Hawaii, Fiji, and the US West Coast. It connects North America and Australasia by erasing vast transpacific distances. more>

Related>