Category Archives: Communication industry

The information arms race can’t be won, but we have to keep fighting

By Cailin O’Connor – Arms races happen when two sides of a conflict escalate in a series of ever-changing moves intended to outwit the opponent. In biology, a classic example comes from cheetahs and gazelles. Over time, these species have evolved for speed, each responding to the other’s adaptations.

One hallmark of an arms race is that, at the end, the participants are often just where they started. Sometimes, the cheetah catches its prey, and sometimes the gazelle escapes. Neither wins the race because, as one gets better, so does its opponent. And, along the way, each side expends a great deal of effort. Still, at any point, the only thing that makes sense is to keep escalating.

Arms races happen in the human world too. The term arms race, of course, comes from countries at war that literally amass ever-more sophisticated and powerful weapons. But some human arms races are more subtle.

As detailed in the Mueller report – but widely known before – in the lead-up to the 2016 presidential election in the United States, the Russian government (via a group called the Internet Research Agency) engaged in large-scale efforts to influence voters, and to polarize the US public. In the wake of this campaign, social-media sites and research groups have scrambled to protect the US public from misinformation on social media.

What is important to recognize about such a situation is that whatever tactics are working now won’t work for long. The other side will adapt. In particular, we cannot expect to be able to put a set of detection algorithms in place and be done with it. Whatever efforts social-media sites make to root out pernicious actors will regularly become obsolete.

The same is true for our individual attempts to identify and avoid misinformation. Since the 2016 US election, ‘fake news’ has been widely discussed and analyzed. And many social-media users have become more savvy about identifying sites mimicking traditional news sources. But the same users might not be as savvy, for example, about sleek conspiracy theory videos going viral on YouTube, or about deep fakes – expertly altered images and videos.

What makes this problem particularly thorny is that internet media changes at dizzying speed. more>

Updates from Ciena

Latest trends in optical networks – straight from NGON & DCI World
By Helen Xenos – “If you are not sitting at the edge of your seat, you are taking up too much space.”

I heard this quote from a friend recently and thought it was interestingly appropriate in describing the optical networking industry these days. No one has time to sit back. Technology is evolving at an incredibly fast pace, new developments are occurring at a regular cadence, and network providers are regularly evaluating different architecture approaches for evolving their networks.

While attending the 21st Annual NGON & DCI World event in beautiful Nice last week, I had the opportunity to take the pulse of the latest topics and trends driving change in the optical networking landscape.

A popular topic at all optical events – and NGON was no exception – is the discussion of the next technology breakthrough that will bring new levels of capacity scale and cost reduction to transport networks.

If we look at coherent optical shipments, capacity, and average-selling-price data over the past decade, the principal way network providers have kept up with exponentially increasing bandwidth demands while keeping transport costs relatively flat is clear: coherent technology innovations that enable higher throughput at lower cost.

So, how will we get to the next level of cost reduction?

The consistent response to this question in multiple sessions at NGON was higher baud, which means coherent optical solutions that have a higher symbol rate and can process more data per second, resulting in more fiber capacity with less equipment. more>
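To make the baud arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The modulation format, dual-polarization factor, and overhead fraction are illustrative assumptions, not Ciena specifications.

```python
# Back-of-the-envelope wavelength capacity: illustrative numbers only,
# not vendor specifications.
def wavelength_capacity_gbps(baud_gbd, bits_per_symbol,
                             polarizations=2, overhead=0.15):
    """Approximate net capacity of one coherent wavelength.

    baud_gbd        -- symbol rate in gigabaud
    bits_per_symbol -- e.g. 4 for 16QAM, 6 for 64QAM (per polarization)
    polarizations   -- coherent systems typically use 2 (dual-pol)
    overhead        -- fraction consumed by FEC/framing (assumed ~15%)
    """
    raw = baud_gbd * bits_per_symbol * polarizations
    return raw * (1 - overhead)

# Doubling the symbol rate roughly doubles per-wavelength capacity.
print(wavelength_capacity_gbps(34, 4))   # ~231 Gb/s
print(wavelength_capacity_gbps(68, 4))   # ~462 Gb/s
```

The point of the sketch: a higher symbol rate means more capacity per wavelength, so fewer wavelengths, and less equipment, are needed for the same total fiber capacity.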

Related>

Updates from Ciena

Reimagining Ethernet Service Delivery with Intelligent Automation
By Thomas DiMicelli – Communications service providers introduced Ethernet-based services almost 20 years ago as a more flexible and cost-effective alternative to TDM-based services. These services have been continuously enhanced over the years and are widely deployed today; however, traditional Ethernet service activation processes are increasingly out of alignment with market requirements.

I asked Andreas Uys, CTO at Dark Fibre Africa (DFA), an innovative open-access fibre optic company that operates in South Africa, to outline some of the issues concerning Ethernet service activation, and how CSPs can overcome them.

“The limitations of traditional Ethernet service activation processes are quite significant,” Andreas said. “Some of this is due to the way SPs are organized, and some is due to the reliance on manual operations; taken together, these issues dramatically slow the order to service process and delay time to revenue.”

Andreas continued: “Ethernet service activation naturally involves different departments… customer service reps generate work orders, engineering designs the services, and the operations team provisions and manages the services. Each department has its own ‘siloed’ systems and relies on emails and spreadsheets to track workflow progress. This results in a time-consuming process, even to generate a simple quote.”

“Engineers design the service using stale data from multiple offline inventory systems,” Andreas added, “which results in non-optimal designs that waste network resources. Similarly, the operations team uses multiple tools to manually configure each element or domain in the service path, which adds cost and the potential for errors into the process.”

With fragmented systems and workflows, offline service design tools, and error-prone manual provisioning, it is clear that the Ethernet service activation process needs to be updated. So what is the way forward? more>
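As one illustration of where this could go, here is a hypothetical sketch of an intent-style activation flow in Python, in which the design step runs against live inventory and every element in the path is provisioned from a single tool. Every name here (EthernetService, find_path, configure) is invented for illustration and is not any vendor’s actual API.

```python
# Hypothetical sketch of intent-driven Ethernet service activation.
# All class and function names are illustrative, not a real vendor API.
from dataclasses import dataclass

@dataclass
class EthernetService:
    customer: str
    a_end: str          # UNI at site A
    z_end: str          # UNI at site Z
    bandwidth_mbps: int
    cos: str            # class of service

def activate(service: EthernetService, inventory, provisioner):
    """One automated flow instead of siloed departmental hand-offs."""
    # 1. Design against *live* inventory, not stale offline copies
    path = inventory.find_path(service.a_end, service.z_end,
                               service.bandwidth_mbps)
    # 2. Provision every element in the path from a single tool
    for hop in path:
        provisioner.configure(hop, service)
    # 3. Return an auditable record instead of emails and spreadsheets
    return {"service": service, "path": path, "status": "active"}
```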

Related>

Updates from Ciena

GeoMesh Extreme: Release the Kraken!
This article looks at submarine cables’ critical importance to global communications, and at how Ciena’s new GeoMesh Extreme allows submarine cable operators to integrate several technology advancements to enable an open submarine network solution with greater choice and performance.
By Brian Lavallée – Now that I’ve got your attention, what exactly is a kraken?

It’s a legendary sea monster that terrorized ships sailing the North Atlantic Ocean. It was an unknown danger that dwelled in the ocean deep and could attack without warning, resulting in untold mayhem.

Whether the kraken legend originates from a giant squid or octopus sighting is debatable, but it terrorized sailors nonetheless, as they never knew if or when the kraken could be encountered. Legends die hard, but there are real dangers that lurk beneath the oceans of the world, and this is precisely where submarine cables live and work.

Hundreds of years ago, when the kraken was terrifying sailors crisscrossing the world’s oceans, ships were the only method of sharing information between continents that were separated by thousands of kilometers of water. This was until the first reliable transoceanic submarine cable was established over 150 years ago, way back in 1866.

This pioneering telegraph cable transmitted at rates we’d scoff at today, but it was undoubtedly a monumental performance leap compared to sending handwritten letters back and forth between continents, which could take weeks or even months. Imagine waiting months to receive an important letter, only to find you couldn’t read the sender’s handwriting! Oh, the horror!

Most modern submarine cables are based on coherent optical transmission technology, which enables colossal capacity improvements over the early telegraph cables of yesteryear, and can reliably carry multiple terabits of data each second.

We’ve come a long way in improving how much data we can cram into these optical fibers the size of a human hair, housed in cables the size of a common garden hose, and laid upon the world’s seabeds for thousands of kilometers. We’ve also become utterly and completely dependent upon this critical infrastructure, which now carries $10 trillion – yes, TRILLION – worth of transactions every day, handles over 95% of all inter-continental traffic, and is experiencing over 40% CAGR traffic growth worldwide.
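To put that 40% CAGR figure in perspective, here is a quick compound-growth calculation (illustrative numbers only):

```python
# Compound annual growth: traffic multiplier after n years at 40% CAGR.
def growth_multiplier(cagr: float, years: int) -> float:
    return (1 + cagr) ** years

for n in (1, 5, 10):
    print(n, "years:", round(growth_multiplier(0.40, n), 1), "x")
# 5 years -> ~5.4x the traffic; 10 years -> ~28.9x
```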

This network infrastructure will become more critical, if that’s even possible! more>

Related>

Updates from ITU

How AI can improve agriculture for better food security
ITU News – Roughly half of the 821 million people considered hungry by the United Nations are those who dedicate their lives to producing food for others: farmers.

This is largely attributed to the vulnerability of farmers to agricultural risks, such as extreme weather, conflict, and market shocks.

Smallholder farmers, who produce some 60-70% of the world’s food, are particularly vulnerable to risks and food insecurity.

Emerging technologies such as Artificial Intelligence (AI), however, have shown particular promise in tackling challenges such as lack of expertise and climate change, and in improving resource optimization and consumer trust.

AI assistance can, for instance, enable smallholder farmers in Africa to more effectively address scourges such as viruses and the fall armyworm that have plagued the region over the last 40 years despite extensive investment, said David Hughes, Co-Founder of PlantVillage and Assistant Professor at Penn State University, at a session on AI for Agriculture at last week’s AI for Good Global Summit. more>

Related>

Updates from Ciena

Avoid outage outrage: Why AI-assisted operations is the next big thing for networks
By Kailem Anderson – You can’t go far in the broader tech industry these days without coming across a conversation about the future of artificial intelligence (AI). I’ve been talking about AI and machine learning (ML) in telecom networks for quite a while, and for good reason: network operators desperately want and need AI to help simplify their complex network operations.

Troubleshooting and resolving issues in today’s increasingly complex and dynamic networks has become a major operational burden, complicated by multiple management systems and a flood of raw network data and alarms.

This results in two major challenges. First, the flood of raw data obscures true insight into the state of the network, making it difficult to detect indications of potential network outages before customers are impacted. Second, the “trouble-to-resolve” process becomes slow and tedious as operations teams struggle to identify, isolate, and rectify an issue’s root cause.

These challenges can result in network troubles that last for weeks or even months. In North America alone, an IHS Markit report from 2016 estimated that network outages cost enterprises $700 billion a year in lost revenues and productivity.

To address these challenges, Blue Planet has today introduced a comprehensive solution built around AI and advanced analytics. more>
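As a flavor of what AI-assisted operations means in practice, here is a minimal anomaly-detection sketch over a stream of telemetry samples. It uses a rolling z-score as a simple stand-in for illustration; it is not a description of Blue Planet’s actual analytics.

```python
# Minimal anomaly-detection sketch over a telemetry stream.
# A rolling z-score stand-in for illustration -- not Blue Planet's
# actual algorithms.
from collections import deque
from statistics import mean, stdev

def flag_anomalies(samples, window=20, threshold=3.0):
    """Yield (timestamp, value) pairs that deviate sharply from
    the recent baseline -- potential early signs of trouble."""
    history = deque(maxlen=window)
    for t, value in samples:
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma and abs(value - mu) / sigma > threshold:
                yield t, value
        history.append(value)
```

The design point is surfacing a handful of statistically unusual readings instead of a flood of raw alarms, so operators can act before customers notice.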

Related>

How European telcos are monitoring our online activity

The security and privacy of personal data are being jeopardized as Deep Packet Inspection is deployed by internet service providers.
By Katherine Barnett – Europe has not escaped the global move towards ‘surveillance capitalism’. Numerous pieces of legislation that put online freedoms and privacy at risk are under consideration—the UK’s Online Harms white paper is just one example.

The European Digital Rights (EDRi) organization recently discovered that European telcos were monitoring internet connections and traffic through a technique known as Deep Packet Inspection (DPI).

European telcos have so far escaped penalization for their use of DPI, on the grounds that it counts as ‘traffic management’. Under current net-neutrality law, it is technically allowed for purposes of network optimization—but its use for commercial or surveillance purposes is banned.

In January, however, EDRi produced a report outlining how as many as 186 European ISPs had been violating this constraint, using DPI to affect the pricing of certain data packages and to slow down internet services running over capacity. Alongside 45 other NGOs and academics, it is pushing for the use of DPI to be terminated, having sent an open letter to EU authorities warning of the dangers.

Deep Packet Inspection is a method of inspecting traffic sent across a user’s network. It allows an ISP to see the contents of unencrypted data packets and grants it the ability to reroute or block traffic.

Data packets sent over a network are conventionally filtered by examining only the ‘header’ of each packet, so the content of data traveling over the network remains private. Packets work like letters: simple packet filtering lets ISPs see the ‘address’ on the envelope but not the contents.

DPI, however, gives ISPs the ability to ‘open the envelope’ and view the contents of data packets. It can also be used to block or completely reroute data.
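To make the envelope analogy concrete, here is a toy Python sketch contrasting the two approaches. The 20-byte IPv4-style header layout is simplified for illustration.

```python
# Contrast between header-only filtering and deep packet inspection.
# Toy packet format: 20-byte IPv4-style header followed by the payload.
import struct

def header_only(packet: bytes):
    """'Reads the envelope': fixed header fields, payload untouched."""
    version_ihl, _, total_len = struct.unpack("!BBH", packet[:4])
    src, dst = struct.unpack("!4s4s", packet[12:20])
    return {"len": total_len, "src": src.hex(), "dst": dst.hex()}

def deep_inspection(packet: bytes, needle: bytes):
    """'Opens the envelope': scans the payload content itself."""
    payload = packet[20:]        # everything after the header
    return needle in payload     # e.g. match against a URL or keyword
```

Header-only filtering is what routing has always required; the second function is the capability that net-neutrality rules are meant to constrain.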

Regulators have so far turned a blind eye to this blatant disregard for net-neutrality law, and telcos are pushing for DPI to be fully legalized.

This sparks major concerns about user privacy and security, as DPI renders visible all unencrypted data sent across a user’s connection, allowing ISPs to see browsing activity. more>

The unlikely origins of USB, the port that changed everything

By Joel Johnson – In the olden days, plugging something into your computer—a mouse, a printer, a hard drive—required a zoo of cables.

If you’ve never heard of those things (and even if you have), thank USB.

When it was first released in 1996, the idea was right there in the first phrase: Universal Serial Bus. And to be universal, it had to just work. “The technology that we were replacing, like serial ports, parallel ports, the mouse and keyboard ports, they all required a fair amount of software support, and any time you installed a device, it required multiple reboots and sometimes even opening the box,” says Ajay Bhatt, who retired from Intel in 2016. “Our goal was that when you get a device, you plug it in, and it works.”

But it was an initial skeptic that first popularized the standard: in a shock to many geeks in 1998, the Steve Jobs-led Apple released the groundbreaking first iMac as a USB-only machine.

Now a new cable design, Type-C, is creeping in on the typical USB Type-A and Type-B ports on phones, tablets, computers, and other devices—and mercifully, unlike the old USB cable, it’s reversible. The next-generation USB4, coming later this year, will be capable of achieving speeds upwards of 40Gbps, which is over 3,000 times faster than the highest speeds of the very first USB.
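That multiplier checks out against the original USB 1.x “full speed” rate of 12 Mb/s; a quick sanity check with round numbers:

```python
# Sanity check of the "over 3,000 times faster" claim.
usb1_full_speed_bps = 12e6    # USB 1.x full speed: 12 Mb/s
usb4_bps = 40e9               # USB4: 40 Gb/s
print(usb4_bps / usb1_full_speed_bps)   # ~3333x
```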

Bhatt couldn’t have imagined all of that when, as a young engineer at Intel in the early ’90s, he was simply trying to install a multimedia card. The rest is history, one that Joel Johnson plugged into with some of the key players. more>

Updates from Ciena

Learn about the technology behind Ciena’s WaveLogic 5
By Kacie Levy – If you are like me, your to-do list gets longer every day, so finding the time to stay up to date on industry trends can be a challenge. That’s why we created Ciena’s Chalk Talk Video series. These videos give you the opportunity to spend a few minutes with our experts and learn more about the future of networking.

We recently introduced Ciena’s WaveLogic 5 to the market, our next-gen 800G-capable coherent optical chipset, which includes two distinct solutions to address the divergent requirements network operators and Internet Content Providers are encountering:

  • WaveLogic 5 Extreme: will deliver 800G of capacity over a single wavelength, with capacity tunable down to 200G, supporting customers who need maximum capacity and performance from their networks.
  • WaveLogic 5 Nano: will deliver the strength of Ciena’s coherent optical technology and expertise in footprint-optimized 100G-400G solutions, targeting applications where space and power are the primary considerations.

As Ciena’s Scott McFeely said during the unveiling, there was a lot to unpack in the announcement.

So, in the Chalk Talk videos below, Joe Shapiro, the product manager responsible for Ciena’s WaveLogic coherent solutions, provides an overview of each WaveLogic 5 solution, its key technological features, and the benefits it delivers. more>

Related>

Electrical power systems for space missions require careful consideration

Mission success requires the optimal combination of primary and secondary sources
By Maurizio Di Paolo Emilio – A satellite needs an energy source that performs flawlessly, with its battery working continuously for many years. The electrical power system is perhaps the most fundamental requirement of the satellite payload, as a power system failure results in the loss of the space mission. It’s telling that many early satellites failed precisely because of such power system failures.

Power systems cover all aspects of energy production, storage, conditioning, distribution, and conversion for all types of space applications. Missions can last from a few minutes (launchers) to decades, as with interplanetary probes or the International Space Station (ISS), and can require anywhere from a few watts (CubeSats) to tens of kilowatts (large telecommunications spacecraft or the ISS). The electrical loads of a satellite often vary depending on which instruments or subsystems are running at a given time.

Therefore, designing the electrical power system (EPS) for a satellite requires selecting the optimal combination of primary and secondary sources for the architecture. more>
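A simplified power-budget sketch shows the trade the article describes: the solar array (primary source) must carry the loads in sunlight and recharge the battery (secondary source) that carries them through eclipse. All numbers below are assumptions for illustration, not mission data.

```python
# Simplified LEO power-budget sketch: the solar array (primary source)
# must run the loads in sunlight AND recharge the battery (secondary
# source) that carries the loads through eclipse.
# All numbers are illustrative assumptions.

load_w       = 500.0   # average spacecraft load
sunlight_min = 60.0    # sunlit portion of a ~95-minute LEO orbit
eclipse_min  = 35.0    # eclipse portion
charge_eff   = 0.90    # assumed battery charge efficiency

# Energy the battery must supply during eclipse (watt-hours)
eclipse_wh = load_w * eclipse_min / 60.0

# The array must power the load and replace that energy while in sun
array_w = load_w + eclipse_wh / (charge_eff * sunlight_min / 60.0)
print(round(eclipse_wh), "Wh from battery;", round(array_w), "W array")
# -> ~292 Wh from battery; ~824 W array
```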