Tag Archives: Internet

Updates from Chicago Booth

Machine learning can help money managers time markets, build portfolios, and manage risk
By Michael Maiello – It’s been two decades since IBM’s Deep Blue beat chess champion Garry Kasparov, and computers have become even smarter. Machines can now understand text, recognize voices, classify images, and beat humans in Go, a board game more complicated than chess, and perhaps the most complicated in existence.

And research suggests today’s computers can also predict asset returns with unprecedented accuracy.

Yale University’s Bryan T. Kelly, Chicago Booth’s Dacheng Xiu, and Booth PhD candidate Shihao Gu investigated 30,000 individual stocks that traded between 1957 and 2016, examining hundreds of possibly predictive signals using several techniques of machine learning, a form of artificial intelligence. They conclude that ML had significant advantages over conventional analysis in this challenging task.

ML uses statistical techniques to give computers abilities that mimic and sometimes exceed human learning. The idea is that computers will be able to build on solutions to previous problems to eventually tackle issues they weren’t explicitly programmed to take on.

“At the broadest level, we find that machine learning offers an improved description of asset price behavior relative to traditional methods,” the researchers write, suggesting that ML could become the engine of effective portfolio management, able to predict asset-price movements better than human managers.
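
The study compares many model families, from penalized linear regressions up to neural networks, all trained to map stock-level signals to next-period returns and judged on out-of-sample fit. A minimal sketch of that setup, using entirely synthetic data and the simplest (ridge) model rather than anything from the paper itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_signals = 2000, 20

# Hypothetical panel of predictive signals (momentum, size, volatility, ...)
X = rng.normal(size=(n_stocks, n_signals))

# Synthetic "true" returns: only the first two signals actually matter
beta = np.zeros(n_signals)
beta[0], beta[1] = 0.05, -0.03
y = X @ beta + 0.05 * rng.normal(size=n_stocks)

# Hold out the last quarter of the panel as a pseudo out-of-sample set
X_tr, y_tr = X[:1500], y[:1500]
X_te, y_te = X[1500:], y[1500:]

# Ridge regression: the penalty shrinks weights on uninformative signals
lam = 1.0
b_hat = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_signals), X_tr.T @ y_tr)

# Out-of-sample R^2 measured against a naive zero forecast
pred = X_te @ b_hat
r2_oos = 1 - np.sum((y_te - pred) ** 2) / np.sum(y_te ** 2)
```

The tree and neural-network models the researchers favor replace the linear fit with more flexible function approximators, but the train-then-evaluate-out-of-sample pattern is the same.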

Updates from Ciena

Breaking Down the Barriers Between IT & Network
By James Crawshaw – “Digital transformation” initiatives in the telecom sector generally fall into one of three key categories: customer engagement, new services, and operational agility. The first category is all about meeting customer expectations for ease of ordering, delivery and problem resolution – for today’s existing services.

The second category is about finding new sources of revenue either by becoming aggregators of third-party content and services (platform companies), or by enabling internal innovation through the adoption of DevOps and a fast-fail mentality.

The third category may be less sexy, but it is no less important. Increased agility of network and IT operations through greater automation not only has potentially significant cost saving benefits, it is also an enabler of the better customer experience and faster time-to-market that underpin the first two transformation categories.

One of the great challenges with automation in the telecom industry is that the networking and IT domains remain heavily siloed at many service providers today, with hundreds or even thousands of manual processes required to map data from Operations Support Systems (planning, fulfillment, assurance, etc.) to network management and orchestration systems.

Not only does this lead to a lot of “swivel-chair” operations to bridge the gap, but the fragmented data systems also reduce visibility into real-time service and network state.

The quick fix is to over-provision network resources to cope with this lack of visibility, but that leads to unnecessarily high capex in addition to the opex overhead associated with highly manual operations.

Updates from Adobe

Bringing Language to Life
By Amy Papaelias – Isabel Lea didn’t expect to fall down the rabbit hole of variable font technology. But since the London-based graphic designer started the Adobe Creative Residency in May 2018, she’s repeatedly found herself at the intersection between technological experimentation and typographic innovation.

If you haven’t spent much time on that particular corner, you may not be familiar with the variable font format. It can reduce web font file sizes and give you loads of typographic variations. (Let’s say you’re unsuccessfully searching for a condensed but slightly bold version of a typeface for a web design. If you choose a variable font, you simply tweak the font’s values using CSS until you get exactly what you’re after.)

However, the possibilities go way beyond the typographically practical, into animation and other areas people are just beginning to explore.

Lea first learned about variable fonts at a two-week intensive type design course at the University of Reading’s Department of Typography. “We had a hands-on workshop where we were looking at variable fonts,” says Lea.

“I thought, ‘Great, you can make a font pulse. Can you make it pulse to something, like music?’”

The new spirit of postcapitalism

Capitalism emerged in the interstices of feudalism and Paul Mason finds a prefiguring of postcapitalism in the lifeworld of the contemporary European city.
By Paul Mason – Raval, Barcelona, March 2019. The streets are full of young people (and not just students)—sitting, sipping drinks, gazing more at laptops than into each other’s eyes, talking quietly about politics, making art, looking cool.

A time traveler from their grandparents’ youth might ask: when is lunchtime over? But it’s never over because for many networked people it never really begins. In the developed world, large parts of urban reality look like Woodstock in permanent session—but what is really happening is the devalorization of capital.

But just 20 years after the roll-out of broadband and 3G telecoms, information resonates everywhere in social life: work and leisure have become blurred; the link between work and wages has been loosened; the connection between the production of goods and services and the accumulation of capital is less obvious.

The postcapitalist project is founded on the belief that inherent in these technological effects lies a challenge to the existing social relations of a market economy and, in the long term, the possibility of a new kind of system that can function without the market, and beyond scarcity.

But during the past 20 years, as a survival mechanism, the market has reacted by creating semi-permanent distortions which—according to neoclassical economics—should be temporary.

In response to the price-collapsing effect of information goods, the most powerful monopolies ever seen have been constructed. Seven out of the top ten global corporations by market capitalization are tech monopolies; they avoid tax, stifle competition through the practice of buying rivals and build ‘walled gardens’ of interoperable technologies to maximize their own revenues at the expense of suppliers, customers and (through tax avoidance) the state.

Updates from Ciena

AI Ops: Let the data talk
The catalysts and ROI of AI-powered network analytics for automated operations were the focus of discussion for service providers at the recent FutureNet conference in London. Blue Planet’s Marie Fiala details the conversation.
By Marie Fiala – Do we need perfect data? Or is ‘good enough’ data good enough? Certainly, there is a need to find a pragmatic approach or else one could get stalled in analysis-paralysis. Is closed-loop automation the end goal? Or is human-guided open loop automation desired? If the quality of data defines the quality of the process, then for closed-loop automation of critical business processes, one needs near-perfect data. Is that achievable?

These issues were discussed and debated at last week’s FutureNet conference in London, where the show focused on solving network operators’ toughest challenges. Industry presenters and panelists stayed true to the themes of AI and automation, all touting the necessity of these interlinked software technologies, yet there were varied opinions on approaches. Network and service providers such as BT, Colt, Deutsche Telekom, KPN, Orange, Telecom Italia, Telefonica, Telenor, Telia, Telus, Turk Telekom, and Vodafone weighed in on the discussion.

On one point, most service providers were in agreement: there is a need to identify a specific business use case with measurable ROI, as an initial validation point when introducing AI-powered analytics into operations.

How digital technology is destroying our freedom

“We’re being steamrolled by our devices” —Douglas Rushkoff
By Sean Illing – There’s a whole genre of literature called “technological utopianism.” It’s an old idea, but it reemerged in the early days of the internet. The core belief is that the world will become happier and freer as science and technology develop.

The role of the internet and social media in everything from the spread of terrorist propaganda to the rise of authoritarianism has dampened much of the enthusiasm about technology, but the spirit of techno-utopianism lives on, especially in places like Silicon Valley.

Douglas Rushkoff, a media theorist at Queens College in New York, is the latest to push back against the notion that technology is driving social progress. His new book, Team Human, argues that digital technology in particular is eroding human freedom and destroying communities.

We’re social creatures, Rushkoff writes in his book, yet we live in a consumer democracy that restricts human connection and stokes “whatever appetites guarantee the greatest profit.” If we want to reestablish a sense of community in this digital world, he argues, we’ll have to become conscious users of our technology — not “passive objects” as we are now.

But what does that mean in practical terms? Technology is everywhere, and we’re all more or less dependent upon it — so how do we escape the pitfalls?

How to govern a digitally networked world

Because the internet is a network of networks, its governing structures should be too. The world needs a digital co-governance order that engages public, civic and private leaders.
By Anne-Marie Slaughter and Fadi Chehadé – Governments built the current systems and institutions of international cooperation to address 19th- and 20th-century problems. But in today’s complex and fast-paced digital world, these structures cannot operate at ‘internet speed’.

Recognizing this, the United Nations secretary-general, António Guterres, last year assembled a high-level panel—co-chaired by Melinda Gates and the Alibaba co-founder Jack Ma—to propose ways to strengthen digital governance and cooperation. (Fadi Chehadé, co-author of this article, is also a member.) It is hoped that the panel’s final report, expected in June, will represent a significant step forward in managing the potential and risks of digital technologies.

Digital governance can mean many things, including the governance of everything in the physical world by digital means. We take it to mean the governance of the technology sector itself, and the specific issues raised by the collision of the digital and physical worlds (although digital technology and its close cousin, artificial intelligence, will soon permeate every sector).

Because the internet is a network of networks, its governing structures should be, too. Whereas we once imagined that a single institution could govern global security or the international monetary system, that is not practical in the digital world. No group of governments, and certainly no single government acting alone, can perform this task.

Instead, we need a digital co-governance order that engages public, civic and private leaders on the basis of three principles of participation.

First, governments must govern alongside the private and civic sectors in a more collaborative, dynamic and agile way.

Secondly, customers and users of digital technologies and platforms must learn how to embrace their responsibilities and assert their rights.

Thirdly, businesses must fulfill their responsibilities to all of their stakeholders, not just shareholders.

Updates from Ciena

Protecting your business from cyber threats
The phone rings — there’s been a breach. Ciena’s chief security architect Jim Carnes explains how to integrate security into each aspect of your business to mitigate this stressor – and stop fearing that call.

By Jim Carnes – It’s Friday afternoon (it always happens on Friday afternoon) and the phone rings — there’s a breach. Your internet provider has called and malware associated with the latest botnet has been detected coming from your corporate network. The incident response plans are triggered and everyone goes into high alert, looking for the source.

The common thought trajectory goes something like: How could this happen? We use the latest and greatest security products. Did someone open a phishing email? Did a hacker breach our firewall or was a vendor compromised? There goes my weekend.

How can we stop fearing that Friday afternoon call?

Integrating security into each aspect of your business could mitigate this stressor. When people, processes, inventory and technology are coordinated, the fear and uncertainty of security breaches are replaced with straightforward and seamless responses that protect your Friday evening dinner plans.

The conversation should always begin with your business. You need to understand the processes, the people and the vendor and partner relationships. Understanding how the critical aspects of the company function and interact will often point to gaps in security.

Are the tools that facilitate secure business processes in place? Look for:

  • Single sign-on solutions to ease integration of people and technology
  • Multi-factor authentication solutions that ease the password management burden on users (according to the 2017 Verizon DBIR, compromised passwords play a role in nearly half of breaches)
  • Product suites that integrate business processes and technology solutions
  • Secure supply chains that enumerate the risks to both hardware and software solutions while protecting them (a white paper published by the SANS Institute offers guidance on combating supply chain cyber risk)
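
The article doesn’t prescribe a particular multi-factor mechanism, but the most common second factor is a time-based one-time password (TOTP, RFC 6238), which can be sketched with only the standard library. This is an illustration of the algorithm, not a production implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Counter-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over the current window."""
    return hotp(secret, int(time.time()) // period, digits)
```

The sketch shows why a stolen password alone fails against MFA: without the enrolled per-user secret, an attacker cannot reproduce the six-digit code for the current 30-second window. Real deployments should use a vetted library.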

Whether your business delivers software, hardware or services, the development of those solutions should include security from the start. The ability to clearly articulate the purpose of the system, how it will be used, who will be using it and what value it provides will help begin the conversation. Articulating these key factors will help define the threat environment, the adversaries and the controls necessary to mitigate attacks.

Mitigations will therefore have context and be able to address real threats, rather than generic ones.

Updates from ITU

What do ‘AI for Social Good’ projects need? Here are 7 key components.
By Anna Bethke – At their core, ‘AI for Social Good’ projects use artificial intelligence (AI) hardware and software technologies to positively impact the well-being of people, animals or the planet – and they span most, if not all, of the United Nations Sustainable Development Goals (SDGs).

The range of potential projects continues to grow as the AI community advances our technology capability and better understands the problems being faced.

Our team of AI researchers at Intel achieved success by working with partners to understand the problems, collecting the appropriate data, retraining algorithms, and molding them into a practical solution.

At its core, an AI for Social Good project requires the following elements:

  1. A problem to solve, such as improving water quality, tracking endangered species, or diagnosing tumors.
  2. Partners to work together in defining the most complete view of the challenges and possible solutions.
  3. Data with features that represent the problem, accurately labeled, with privacy maintained.
  4. Compute power that scales for both training and inference, no matter the size and type of data, or where it lives. An example of hardware choice is at ai.intel.com/hardware.
  5. Algorithm development, which is the fun part! There are many ways to solve a problem, from a simple logistic regression algorithm to complex neural networks. Algorithms match the problem, type of data, implementation method, and more.
  6. Testing to ensure the system works in every way we think it should, like driving a car in rain, snow, or sleet over a variety of paved and unpaved surfaces. We want to test for every scenario to prevent unanticipated failures.
  7. Real-world deployment, which is a critical and complicated step that should be considered right from the start. Tested solutions need a scalable implementation system in the real world, or their benefits risk never seeing the light of day.
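
The “simple logistic regression” end of the spectrum in element 5 fits in a few lines. The data below is synthetic (hypothetical sensor readings standing in for, say, water-quality measurements); a real project would use the labeled data from element 3:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Hypothetical two-feature sensor readings; label 1 when the underlying
# quantity, plus measurement noise, crosses a threshold
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(float)

# Element 5: plain logistic regression trained by gradient descent
w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
    w -= lr * (X.T @ (p - y)) / n           # gradient of the log-loss
    b -= lr * np.mean(p - y)

# Element 6 in miniature: check the fitted model against known labels
acc = np.mean(((X @ w + b) > 0) == (y == 1))
```

Swapping this for a neural network changes only the model-fitting step; the surrounding elements (problem framing, data, testing, deployment) stay the same.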

At the end of May, Intel AI travels to Geneva, Switzerland, for the UN’s AI for Good Global Summit hosted by ITU and will speak to each of these elements in a hands-on workshop.

Updates from Ciena

The benefits of an integrated C&L-band photonic line system
Network providers are looking for new alternatives to unlock additional network capacity. Ciena’s Kent Jordan explains how upgrading to the L-band can help – if done in the right way.
By Kent Jordan – The photonic layer is the foundation for high capacity networks. Whether the network application is to increase connectivity between data centers, deliver bandwidth-intensive content, or to move business applications into the cloud, the photonic layer provides the mechanism to efficiently light the fiber by assigning and routing wavelengths across the optical spectrum. However, today’s photonic layer systems utilize only a portion of the usable spectrum within the fiber, and operators are increasingly looking at expansion into the L-band to increase capacity.

There are a few factors driving the desire for the L-band. First and foremost is traffic demand. Networks with high-bandwidth applications and sustained bandwidth growth quickly face capacity exhaustion. Once existing capacity is consumed, lighting additional fiber pairs is required. If the cost of laying or leasing new fiber is prohibitive, then alternatives for unlocking additional capacity are needed.

The L-band is one such solution, and it can be used to double the fiber capacity. But for operators to consider deploying L-band solutions, they must be simple to plan and deploy, and the upgrade to the L-band must not impact existing traffic in the C-band.
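
The “double the fiber capacity” figure is easy to sanity-check: the usable L-band spectrum is comparable in width to the C-band, so at the same channel spacing and line rate it carries a comparable channel count. The band edges, 50 GHz grid, and 100G-per-channel numbers below are illustrative assumptions, not Ciena specifications:

```python
# Nominal band edges and channel plan; real systems use vendor-specific
# passbands, amplifier designs, and often flexible rather than fixed grids.
C = 299_792_458.0  # speed of light, m/s

def band_width_thz(lo_nm: float, hi_nm: float) -> float:
    """Optical bandwidth of a wavelength range, in THz."""
    return (C / (lo_nm * 1e-9) - C / (hi_nm * 1e-9)) / 1e12

c_band = band_width_thz(1530, 1565)  # conventional band: roughly 4.4 THz
l_band = band_width_thz(1570, 1608)  # assumed usable L-band slice: ~4.5 THz

def capacity_tbps(width_thz: float, spacing_ghz: float = 50.0,
                  gbps_per_channel: float = 100.0) -> float:
    """Capacity at a fixed grid spacing with one transponder per channel."""
    channels = int(width_thz * 1000 / spacing_ghz)
    return channels * gbps_per_channel / 1000

# Adding the L-band roughly doubles what the C-band alone delivers
total = capacity_tbps(c_band) + capacity_tbps(l_band)
```
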

Building the foundation for a scalable network infrastructure isn’t just about knowing what building blocks to use. It also includes selecting the appropriate architecture and understanding how the pieces fit together, so when it is time to increase capacity, there aren’t any surprises, performance hits, or suboptimal capacity limits.
