Tag Archives: Productivity

Updates from Siemens

Siemens Case Study: Lean Digital Factory Project

By Gunter Beitinger – In October 2017, Siemens launched its Lean Digital Factory (LDF) program. The program brings together experts from different business functions and technology units to define a holistic digital transformation roadmap for all factories of the operating company Digital Industries (DI).

To fully capture the value of big data in manufacturing, the plants of DI needed a flexible data architecture that enables different internal and external users to extract maximum value from the data ecosystem. This is where the Industrial Edge layer comes into the picture: it processes data close to the sensors and data sources (figure).

The Industrial Edge and data lake concept will enable a more powerful solution than any other data storage and utilization concept:

  • The MDP will be a colossal storage area for all manufacturing data and will be tremendously powerful for all user levels
  • The MDP is a centralized and indexed aggregation of distributed, organized datasets
  • Big data will be stored in the MDP independently of its later use, i.e. as raw data
  • In combination with Industrial Edge, the MDP is the prerequisite for effective and scalable cloud computing and machine learning
  • The Industrial Edge serves multiple purposes in this architecture, such as data ingestion, pre-processing, security gating, and real-time decisions
  • Ecosystem functionality is highly integrated, yet modular and service-based
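As an illustration only (not Siemens code), the division of labor described above can be sketched in Python: an edge node forwards raw readings unchanged for the data lake while computing a local summary that supports real-time decisions. All names and values here are hypothetical.

```python
from statistics import mean

def edge_ingest(readings, alert_threshold):
    """Hypothetical Industrial Edge step: forward raw data for the
    data lake unchanged, while computing a local summary that can
    drive real-time decisions close to the sensor."""
    summary = {
        "count": len(readings),
        "mean": mean(readings),
        "alerts": [r for r in readings if r > alert_threshold],
    }
    # Raw data is kept as-is: the lake stores it independently of
    # any later use, so future applications are not constrained.
    return {"raw": readings, "summary": summary}

record = edge_ingest([20.1, 20.4, 35.0, 20.2], alert_threshold=30.0)
```

The point of the sketch is the separation of concerns: raw data flows through untouched, while only the lightweight summary is computed at the edge.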

In DI, harnessing the potential of digitalization at full scale can be challenging due to installed proprietary software solutions, customized processes, standardized interfaces and mixed technologies. At Siemens, however, this did not mean running a large standardization program before leveraging the possibilities of data analytics and predictive maintenance in our plants.

To get rubber on the road at large scale, we needed an architectural concept that allowed us to develop applications, scale them up, and transfer solutions from plant to plant, from engineering to the shop floor, and from supplier to customer, as well as to reuse identified process insights from one application in another. more>

Related>

No, Productivity Does Not Explain Income

Marginal productivity is a thought virus that is sabotaging the scientific study of income.
By Blair Fix – Did you hear the joke about the economists who tested their theory by defining it to be true? Oh, I forgot. It’s not a joke. It’s standard practice among mainstream economists. They propose that productivity explains income. And then they ‘test’ this idea by defining productivity in terms of income.

The marginal productivity theory of income distribution was born a little over a century ago. Its principal creator, John Bates Clark, was explicit that his theory was about ideology and not science. Clark wanted to show that in capitalist societies, everyone got what they produced, and hence all was fair.

Clark was also explicit about why his theory was needed. The stability of the capitalist order was at stake!

Clark created marginal productivity theory to explain class-based income — the income split between laborers and capitalists. But his theory was soon used to explain income differences between workers.

In the mid 20th century, neoclassical economists invented a new form of capital. Workers, the economists claimed, owned ‘human capital’ — a stock of skills and knowledge. This human capital made skilled workers more productive, and hence, made them earn more money. So not only did productivity explain class-based income, it now explained personal income.

Given the problems with comparing the productivity of workers with different outputs, you’d think that marginal productivity theory would have died long ago. After all, a theory that can’t be tested is scientifically useless.

Fortunately (for themselves), neoclassical economists don’t play by the normal rules of science. If you browse the economics literature, you’ll find an endless stream of studies claiming that wages are proportional to productivity. Under the hood of these studies is a trick that allows productivity to be universally compared. And even better, it guarantees that income will be proportional to productivity. more>
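The circularity Fix describes can be made concrete with a toy calculation (all figures hypothetical): if productivity is *defined* as income per hour worked, then income is proportional to productivity by construction, whatever the workers actually produce.

```python
# Toy illustration of the circular definition (hypothetical figures).
wages = [30000, 45000, 90000]   # three workers' annual incomes
hours = [2000, 2000, 2000]      # hours worked by each

# 'Productivity' measured from income itself:
productivity = [w / h for w, h in zip(wages, hours)]

# The 'empirical finding' that wages track productivity is now
# guaranteed: wage divided by productivity is just hours worked.
ratios = [w / p for w, p in zip(wages, productivity)]
```

No matter what numbers you start with, `ratios` collapses to hours worked, so the wage-productivity proportionality is baked into the definition rather than discovered in the data.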

Updates from Adobe

Reality-Defying Photo Composites Master the Impossible
By Jordan Kushins – Juan José Egúsquiza is based in Brooklyn, New York, but he spends much of his time as a man of the world. From Paris to San Francisco to Barcelona to Lucerne and beyond, the Lima, Peru-born multimedia artist and Adobe Creative Resident makes his way across the globe with his camera in hand. While exploring, he captures ordinary moments with a click, and these images become the basis for what he calls “Impossible Stories”: brain-bending composites that challenge the way we relate to and interpret our surroundings.

When I was young, I played music—percussion, mostly—and was in a band with my twin brother, who’s also an artist. He was so creative, and making things all the time, often grabbing trash and turning it into sculptures or instruments. That idea of recycling—of taking elements that were meant to be for something and then using them to build something else—was super, super cool to me. At some point I realized I wanted to start creating my own special things as well.

I was 19 or 20 when I first started taking pictures. I’d be traveling, mostly alone, and all of a sudden I’d be somewhere I’ve never been before: walking around, seeing new things, observing ordinary moments. I’ve always liked those the most; like, someone throwing a cookie away in a garbage can. Once you take a picture of it, it becomes something totally different.

At first, I wouldn’t edit my images at all, but eventually I started thinking: “What if I grabbed one element from this image and put it on something else?” Now that kind of photo compositing is a daily practice. more>

Related>

Updates from Siemens

Digitalize battery manufacturing for a greener future with electric vehicles

By Vincent Guo – The electrification of automobiles is gaining momentum globally as many countries have laid out plans to prohibit the sale of internal combustion engine (ICE) cars. The earliest deadline is 2025 for the Netherlands and Norway, with Germany and India next in line in 2030, followed by the UK and France in 2040. Other major economies offer aggressive incentives to push electric vehicles (EVs) to market. For example, the USA, China, Norway, Denmark, and South Korea have offered EV buyers cash subsidies of over $10,000 per vehicle, with Denmark and South Korea paying consumers almost 20,000 Euro for each car purchased.

These incentive plans, however, also indicate that the price of an EV is still high compared to traditional cars. Independent research shows that the electrical powertrain accounts for roughly 50% of an EV's cost, while the powertrain of an ICE car accounts for only 16%. Since the components of an EV and an ICE car are largely similar apart from the powertrain, the powertrain is clearly the source of the difference in total cost. Its most expensive component is the battery pack, which accounts for roughly half of the powertrain and a quarter of the entire car.

Fortunately, the cost of the battery has fallen steadily over the past 10 years. It is about to hit the point at which the total cost of an EV is competitive with an ICE car: about 125-150 USD/kWh.
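The cost shares quoted above can be verified with back-of-envelope arithmetic; the sticker cost below is an assumed figure for illustration only.

```python
# Back-of-envelope check of the quoted cost shares.
car_cost = 40000                     # assumed EV cost in USD (illustrative)
powertrain = 0.50 * car_cost         # powertrain: ~50% of an EV's cost
battery = 0.50 * powertrain          # battery pack: ~half the powertrain
battery_share = battery / car_cost   # hence ~a quarter of the whole car
```

Whatever the assumed sticker price, half of a half is a quarter, which matches the article's claim that the battery pack is about 25% of the entire car.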

As a result, battery manufacturing capacity has been ramping up quickly. Tesla is leading the way with its Gigafactory in Nevada, which has a target annual capacity of 35 GWh. Yet the race is tight, as battery manufacturers in Asia are catching up. CATL of China recently announced a plan to boost its capacity in Germany to 100 GWh. more>

Related>

Takers and Makers: Who are the Real Value Creators?

By Mariana Mazzucato – We often hear businesses, entrepreneurs or sectors talking about themselves as ‘wealth-creating’. The contexts may differ – finance, big pharma or small start-ups – but the self-descriptions are similar: I am a particularly productive member of the economy, my activities create wealth, I take big ‘risks’, and so I deserve a higher income than people who simply benefit from the spillovers of this activity. But what if, in the end, these descriptions are simply just stories? Narratives created in order to justify inequalities of wealth and income, massively rewarding the few who are able to convince governments and society that they deserve high rewards, while the rest of us make do with the leftovers.

If value is defined by price – set by the supposed forces of supply and demand – then as long as an activity fetches a price (legally), it is seen as creating value. So if you earn a lot you must be a value creator.

I will argue that the way the word ‘value’ is used in modern economics has made it easier for value-extracting activities to masquerade as value-creating activities. And in the process rents (unearned income) get confused with profits (earned income); inequality rises, and investment in the real economy falls.

What’s more, if we cannot differentiate value creation from value extraction, it becomes nearly impossible to reward the former over the latter. If the goal is to produce growth that is more innovation-led (smart growth), more inclusive and more sustainable, we need a better understanding of value to steer us.

This is not an abstract debate.

It has far-reaching consequences – social and political as well as economic – for everyone. How we discuss value affects the way all of us, from giant corporations to the most modest shopper, behave as actors in the economy and in turn feeds back into the economy, and how we measure its performance. This is what philosophers call ‘performativity’: how we talk about things affects behavior, and in turn how we theorize things. In other words, it is a self-fulfilling prophecy.

If we cannot define what we mean by value, we cannot be sure to produce it, nor to share it fairly, nor to sustain economic growth. The understanding of value, then, is critical to all the other conversations we need to have about where our economy is going and how to change its course. more>

Updates from Siemens

Gruppo Campari: Brand spirits leader digitizes its business operations with the SIMATIC IT suite
Using Siemens technology, Gruppo Campari has created a unified repository for all product specifications and increased the efficiency of product development and manufacturing processes
Siemens – With so much talk about securing Italian control of key businesses, a few companies play offense and take the Italian lifestyle and “Made in Italy” all over the world. Among them is Gruppo Campari, which closed 26 acquisitions in the spirits industry in the past two decades to become the world’s sixth-largest player, with over 50 premium and super-premium brands. Besides aperitifs of international renown (Campari, Aperol), the portfolio includes bitter liqueurs (Averna, Cynar, Braulio) and spirits (Skyy, Grand Marnier, GlenGrant, Wild Turkey, Appleton). In 2016 the group exceeded €1.7 billion in consolidated revenues, with most sales in the Americas and the Southern Europe, Middle East and Africa (SEMEA) region.

With each acquisition, Gruppo Campari needs to integrate new products, plants and assets into its operations management systems. Recent examples include J. Wray & Nephew, a company with more than 2,000 employees producing Jamaica’s 225-year-old top rum Appleton Estate; Grand Marnier in France, acquired in 2016; and Bulldog London Dry Gin in 2017. Currently, the group operates 58 sites: 18 owned factories, 22 co-packers and 18 distribution centers, counting up to thousands of materials and specifications.

The turning point for the management of such a complex and constantly evolving organization came in 2012. Until then, Gruppo Campari had maintained an unstructured approach to the management of product specifications, which were created locally using Microsoft Word documents or Microsoft Excel® spreadsheets. Besides creating documents in different formats and languages, there was no standard workflow for document authoring and validation, and information was shared via email or phone.

In 2012, the Group launched an extensive digitalization of operation processes, selecting SIMATIC IT Interspec from Siemens PLM Software, a configurable solution for product specification management in process industries, and embracing the Siemens “digitalization” philosophy.

SIMATIC IT Interspec allows the company to develop, configure and manage all product specifications (raw materials, intermediate and finished products and packaging materials), storing all specifications in a single, controlled data repository. more>

Related>

Updates from Siemens

Revolutionizing Plant Performance with the Digital Twin and IIoT eBook
By Jim Brown – How can manufacturers use the digital twin and industrial IoT to dramatically improve manufacturing and product performance?

The manufacturing industries are getting more challenging. Manufacturers must evolve as new technologies remove barriers to entry and enable new, digital players to challenge market share. Operational efficiency is no longer enough to compete in today’s era of digitalization and Industry 4.0.

To remain competitive, companies have to maintain high productivity while offering unprecedented levels of flexibility and responsiveness. We believe this is a fundamental disruption that will change the status quo. To survive, manufacturers need to digitalize operations in order to improve speed, agility, quality, costs, customer satisfaction, and the ability to tailor to customer and market needs.

One of the most compelling digitalization opportunities is adopting the digital twin. This approach combines a number of digital technologies to significantly improve quality and productivity. It starts with comprehensive, virtual models of physical assets – products and production lines – to help optimize designs. But the value is much greater because the physical and virtual twins are connected and kept in sync with real data from the Internet of Things (IoT) and Industrial IoT (IIoT).

Further, companies can use analytics to analyze digital twin data to develop deep insights and intelligence that allow for real-time intervention and long-term, continuous improvement.
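As a minimal sketch (not Siemens code), the real-time intervention described above amounts to comparing the digital twin's predictions against live IIoT readings and flagging drift. All values below are hypothetical.

```python
def deviations(expected, observed, tolerance):
    """Flag time steps where real plant readings drift from the
    virtual twin's prediction by more than the allowed tolerance."""
    return [
        i for i, (e, o) in enumerate(zip(expected, observed))
        if abs(o - e) > tolerance
    ]

twin_temps = [70.0, 70.5, 71.0, 71.5]    # expected values from the model
plant_temps = [70.1, 70.4, 74.2, 71.6]   # real readings via the IIoT layer
flags = deviations(twin_temps, plant_temps, tolerance=1.0)
```

Because the virtual and physical twins are kept in sync, an out-of-tolerance step can be caught and corrected before it turns into scrap or rework.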

The digital twin holds significant productivity and quality opportunities for the plant. It can be used to understand when the plant isn’t operating as intended. It can identify or predict equipment issues that can result in unplanned downtime or correct process deviations before they result in quality slippage, scrap, and rework. more>

Related>

Updates from Georgia Tech

He Quieted Deafening Jets
By Ben Brumfield – In 1969, the roar of a passing jet airliner broke a bone in Carolyn Brobrek’s inner ear, as she sat in the living room of her East Boston home. Many flights took off too close to rooftops then, but even at a distance, jet engines were a notorious source of permanent hearing loss.

For decades, Krishan Ahuja tamed jet noise, for which the National Academy of Engineering elected him as a new member this year. Today, Ahuja is an esteemed researcher at the Georgia Institute of Technology, but he got his start more than 50 years ago as an engineering apprentice in Rolls Royce’s aero-engine division, eventually landing in its jet noise research department.

Jet-setters had been a rare elite, but early in Ahuja’s career in the 1970s, air travel went mainstream, connecting the globe. The number of flights multiplied over the years, and jet engine thrust grew stronger, but remarkably, human exposure to passenger jet noise in the same time period plummeted to a fraction of what it had once been, according to the Federal Aviation Administration.

Ahuja not only had a major hand in that change; he has also felt the transition himself.

“In those days, if jets went over your house and you were outside, you’d feel like you needed to put your hands over your ears. Not today,” said Ahuja, who is a Regents Researcher at the Georgia Tech Research Institute (GTRI) and Regents Professor in Georgia Tech’s Daniel Guggenheim School of Aerospace Engineering. more>

Related>

Updates from Siemens

Why the aerospace industry must adopt condition-based maintenance
By Dave Chan and John Cunneen – When the aerospace industry adopts condition-based maintenance and predictive maintenance methods, the cost of owning and operating aircraft is minimized, downtime is reduced and airworthiness is easier to prove.

Unfortunately, many companies seem to simply go through the motions, using antiquated and increasingly unreliable methods to track reliability; as a result, they incur more downtime than necessary by conducting unneeded maintenance. This not only increases costs but, more importantly, can put the safety of the aircraft at risk.
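In contrast to fixed-calendar servicing, a condition-based approach keys maintenance to monitored indicators. A minimal sketch, with made-up indicator names and limits:

```python
def needs_maintenance(vibration_rms, temp_c, vib_limit=4.5, temp_limit=95.0):
    """Hypothetical condition-based rule: schedule service only when a
    monitored indicator crosses its limit, not on a fixed calendar."""
    return vibration_rms > vib_limit or temp_c > temp_limit

healthy = needs_maintenance(vibration_rms=2.1, temp_c=80.0)  # defer service
worn = needs_maintenance(vibration_rms=6.3, temp_c=82.0)     # service now
```

The healthy component defers its service, avoiding the unnecessary downtime described above, while the worn one is flagged before it becomes a safety issue.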

In our previous blogs, we discussed the cost of certification and the increasing burdens placed on aircraft companies to prove, through certification documentation, that their aircraft meet the government safety standards established in countries and regions worldwide. We also discussed some of the digital tools available to help manage this process, lower costs, decrease time-to-market and increase availability/readiness.

Digitalization can ease the burden of designing and manufacturing an aircraft, but implementing these digital tools is also a pivotal strategy for increasing the efficiency of maintaining, repairing and operating the aircraft.

Major industries such as maritime and oil and gas already use condition-based maintenance to lower costs and reduce downtime. In the maritime industry, just as in aerospace, reliability, availability, maintainability, and safety (RAMS) are key to keeping a fleet operational. more>

Related>

The unlikely origins of USB, the port that changed everything

By Joel Johnson – In the olden days, plugging something into your computer—a mouse, a printer, a hard drive—required a zoo of cables.

If you’ve never heard of those things, and if you have, thank USB.

When it was first released in 1996, the idea was right there in the first phrase: Universal Serial Bus. And to be universal, it had to just work. “The technology that we were replacing, like serial ports, parallel ports, the mouse and keyboard ports, they all required a fair amount of software support, and any time you installed a device, it required multiple reboots and sometimes even opening the box,” says Ajay Bhatt, who retired from Intel in 2016. “Our goal was that when you get a device, you plug it in, and it works.”

But it was an initial skeptic that first popularized the standard: in a shock to many geeks in 1998, the Steve Jobs-led Apple released the groundbreaking first iMac as a USB-only machine.

Now a new cable design, Type-C, is creeping in on the typical USB Type-A and Type-B ports on phones, tablets, computers, and other devices—and mercifully, unlike the old USB cable, it’s reversible. The next-generation USB4, coming later this year, will be capable of achieving speeds upwards of 40Gbps, which is over 3,000 times faster than the highest speeds of the very first USB.

Bhatt couldn’t have imagined all of that when, as a young engineer at Intel in the early ’90s, he was simply trying to install a multimedia card. The rest is history, one that Joel Johnson plugged in to with some of the key players. more>