Root cause of the battle between the FBI and Apple

In case you are not familiar with details of the high stakes battle between the FBI and Apple, here are some links to help get started [2, 3, 4, 5].

This is another example of legal overreach, a natural result of the overrepresentation of legal professionals and underrepresentation of science and technology professionals in the Congress and the executive branch of the US government. And, of course, the judiciary consists entirely of legal professionals.

For technology (and hence products) to function, materials and processes need to be used in ways consistent with the “laws of nature” and nature’s principles. The better aligned products are with natural laws, the better their performance. The FBI, on the other hand, works with “man-made laws.” When there is a conflict between “laws of nature” and “man-made laws,” the prudent approach is to accept the “laws of nature.”

A similar approach produced a huge controversy: the “net neutrality” debates. “Net neutrality” is an attempt to apply legal principles to how networks are to be designed (please see Net neutrality: issues and solution). The result was vacuous debates that did not address the underlying problems in the network industry. Currently, a related issue (the FCC’s authority to regulate the internet) is under litigation. The net result is network industry progress stalled for many years, and the delay or loss of potential benefits from wider and better availability of broadband.

If the position advocated by the FBI is taken to its logical conclusion, the result will be a further decline in the vitality of the US technology sector, a loss of market share for Apple, and new opportunities for competitors whose governments may have a better appreciation for the “laws of nature.”


Lack of clear definitions leading to confusing internet regulations

The ruling by the Telecom Regulatory Authority of India (TRAI) on Facebook’s “Free Basics” [2] has created more confusion: “Can’t regulate intranet tariffs, says TRAI chief.”

There is a lack of concrete definitions for internet-related terms such as ‘internet,’ ‘intranet,’ and ‘net neutrality,’ even though these terms are in pervasive use. One of the reasons is that the internet is still evolving, and various interested parties are staking out their claims.

One way to reduce the confusion is to define regulations with reference to the applicable points of connection or interconnection (public interfaces). Currently used ‘public interfaces’ are identified in the diagrams below:

Additional details are available at:


The internet being a “worldwide commons,” claims and counter-claims about usage rights, ownership, liability and other legal issues are only going to intensify as more and more everyday applications become available.

The report says, “There are concerns that telecom operators may bypass the TRAI order by providing content – such as movies, videos, health, education, shopping or other such services – at highly subsidized rates through such intranet networks.” This is not a real problem.

The real problem is withholding internet investments, and diverting them to “private networks” — degrading internet performance and making it sub-standard in the process. This is already the case in many parts of India during peak-demand periods (for example, during evenings in high-subscriber-density areas.) In these situations, internet systems are overloaded, making internet access unavailable or unusable.

Two necessary steps to reduce the confusion are:

  1. TRAI and other regulators should clearly define what the “internet” is, and
  2. Minimum acceptable service levels for the “internet” should be defined at the various points of connection and interconnection.

If these definitional needs are addressed and minimum service levels are enforced, then the problem of “subsidizing” becomes a non-issue, because the cost of providing premium services through “private networks” will far exceed the cost of making them available over the internet.
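A back-of-the-envelope comparison (in Python, with entirely hypothetical figures) illustrates the point: a dedicated “private network” concentrates its full build-out cost on its own subscribers, while internet delivery spreads the cost of shared infrastructure across all users.

    # Hypothetical cost comparison: premium services over a dedicated
    # "private network" vs. over shared internet infrastructure.
    # All figures are illustrative assumptions, not measured costs.

    premium_subscribers = 100_000

    # Dedicated private network: the operator bears the full build-out alone.
    private_network_capex = 50_000_000
    private_cost_per_subscriber = private_network_capex / premium_subscribers

    # Shared internet: delivery rides on infrastructure whose cost is spread
    # across every subscriber and service that uses it.
    shared_network_capex = 200_000_000
    all_internet_users = 5_000_000
    shared_cost_per_subscriber = shared_network_capex / all_internet_users

    print(f"Private network: {private_cost_per_subscriber:,.0f} per premium subscriber")
    print(f"Shared internet: {shared_cost_per_subscriber:,.0f} per subscriber")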


Consultation Process on Differential Pricing for Data Services

Response to Consultation Process on Differential Pricing for Data Services by TRAI (India)

Here are my answers to the questions on Differential Pricing for Data Services.

Question 1: Should the TSPs be allowed to have differential pricing for data usage for accessing different websites, applications or platform?

The breakthrough capabilities enabled by the internet are rich-media communication and mass interaction (for example, social media), bypassing the limitations of distance and time. These capabilities must be made available universally to maximize the economic benefits of the new medium in business, commerce, education, healthcare, culture, politics and governance. Differential pricing that limits access to internet capabilities will inherently create inequalities with cascading side effects.

Question 2: If differential pricing for data usage is permitted, what measures should be adopted to ensure that the principles of nondiscrimination, affordable internet access, competition and market entry and innovation are addressed?

Differential pricing for access to the internet medium must not be permitted. Internet access should be treated as a “common carrier” public utility. Differential pricing may apply for specific software applications, or service levels — but not to internet content.

Question 3: Are there alternative methods/technologies/business models, other than differentiated tariff plans, available to achieve the objective of providing free internet access to the consumer? If yes, please suggest/describe these methods/technologies/business models. Also describe the potential benefits and disadvantages associated with such methods/technologies/ business models?

Access to the internet is primarily a technology issue. The current market confusion is due to attempts to bypass technology constraints through non-technology means. Providing “free internet” is an economic and social policy issue; therefore, it must be achieved through economic and financial methods (subsidies). Trying to achieve social goals through technology constraints is futile.

There is a mismatch between the currently promoted internet architecture and the optimum network architecture for maximizing economic benefits. Please see the attached document, “Network Reference Model,” for more information.

Question 4: Is there any other issue that should be considered in the present consultation on differential pricing for data services?

There is a market gap in the products currently available for effective and efficient internet access. Steps need to be taken to facilitate commercial availability of products that take maximum advantage of available technologies for access networks because internet access is an intrinsic bottleneck.

Additional details available upon request.

Supplementary information

Here are links to articles written when the “Net Neutrality” controversy/debate was raging in the USA.

(1) Net neutrality: issues and solution

(2) Recommendations to the FCC for the path forward

(3) An Internet Transit Map

(4) Internet “Fast lane” and “Slow lane”

(5) Tragedy of Internet Commons

(6) Financialization in telecom

Please contact us if you have questions or need additional details.


State of Telecommunications

First, Tom Wheeler, the Federal Communications Commission Chairman, must be congratulated for adroitly navigating the “net neutrality” whirlpools [2] and positioning the FCC as the champion of a “fast, fair and open” internet. However, there is a risk that a future FCC may stray from the high ideals Chairman Wheeler has defined, and may misuse the newly acquired powers. While the FCC decision has brought regulatory clarity to the marketplace, the underlying causes of the market failure remain.

I wrote in 2006, “Factoring implications of technology in business and economic decision making has not kept up with the increased role of technology in the economy.” The tortuous path the FCC took to reclassify internet as a telecommunication service provides an instructive example of the inadequacy of technology policy decision making.

In 2002, the FCC decided that the internet was an “information service” [2]. As internet usage grew, additional demands were placed on the network infrastructure, requiring acceptable new “codes of conduct” for network providers. Rather than continuing to invest in their existing infrastructure, the telecom carriers were enamored with improvements in wireless technology and invested heavily in it. The FCC obliged by permitting the blocking and throttling of services for “network management purposes.” While there was an appearance of innovation and new investment by the telcos, in reality their decisions were driven primarily by financial objectives [2].

Initial internet growth was fueled by xDSL technologies. As market demand grew, cable companies started offering internet access. But the FCC classified internet access provided through cable networks as an “information service.” The piecemeal regulatory decisions by the FCC over the years created absurdities. Under the FCC rules, broadband was a telecommunications service subject to regulation when provided using xDSL technologies, but an unregulated information service when provided over a cable network. Verizon was unhappy that it was being regulated while its competitor Comcast was not, and challenged the decision in the courts. The court upheld Verizon’s challenge and set aside the FCC internet regulations. The revised FCC decision simplifies and brings clarity and uniformity to broadband regulation.

A review of history may be helpful to better understand the current telecom market configuration. Even though Alexander Graham Bell invented the telephone [2] in 1876, its universal adoption was long drawn out and traumatic. After realizing its potential, the then stockholders recruited Theodore Vail (again) in 1907 to build an organization to fully develop the potential of the technology. Vail had previously demonstrated his organizational skills at the Railway Mail Service. Vail realized that the communication possibilities offered by telephone connectivity made it a “natural monopoly.” Vail’s knowledge and insights helped him build the consensus to make AT&T a government-sanctioned monopoly [2]. In return, AT&T agreed to be regulated. To overcome the negative effects of a monopoly business, AT&T instituted a counterbalancing organizational social mission: “a single communication system offering the best possible service.”

In later years, AT&T lost its zeal for the social mission. Along with other political factors, a series of developments resulted in the 1984 divestiture [2] and the current market configuration.

Now, telcos no longer have monopoly markets, and their mission has changed to providing the least amount of service they can get away with, for maximum financial gain. In addition, under the revised FCC broadband definition, Comcast is now the largest broadband provider [2], and it is enjoying “huge profits.” This is an indication of the high barriers to entry in the broadband market. One of the reasons for the high barrier to entry is the lack of suitable technology for providing cost-effective broadband.

The FCC regulatory framework is based on the historical AT&T monopoly market conditions, when AT&T was also a leading technology developer [2] with social goals. The resource gaps for much-needed broadband technology innovation remain unfilled in the current market configuration.


Financialization in telecom

The normal role for finance in the economy is to facilitate trade and production efficiently. Through these transactions profits are generated. However, due to dysfunctional factors, it can become more profitable to use financial methods to generate profits without trade or production. This abnormal role of finance in the economy is termed financialization.

Financialization is “an economic, social and moral disaster: net disinvestment, loss of shareholder value, crippled capacity to innovate, destruction of jobs, exploitation of workers, runaway executive compensation, windfall gains for activist insiders, rapidly increasing inequality and sustained economic stagnation.” [2]

Financialization in the telecom industry has become a destructive force. “AT&T and Verizon say 10Mbps is too fast for “broadband,” 4Mbps is enough” is the best indicator yet of the depth of financialization in telecom. Providing better services would severely limit the telcos’ financial engineering activities. It is ironic coming from the heirs to a legend built on the promise of providing the “best possible service.”

It seems telcos no longer consider it their business to provide services their customers need, as illustrated by these reports:

Now, contrast it with how other industries are operating, for example, utilities, auto, or computing. Here are some highlights:

Loss of direction by dominant communication providers has negative cascading effects on the industry. It has decimated a once thriving telecom technology supply chain. Nortel is no more [2, 3]. Alcatel-Lucent “has not earned any money 2006-2013” [2, 3]. Motorola has shrunk dramatically [2, 3].

With all these things going on, one would think that there would be an earnest effort to find out what is wrong. Instead, the preoccupation in the media and industry is with “net neutrality” confusion, which the FCC Chairman summed up: “the idea of net neutrality has been discussed for a decade with no lasting results.”


Tragedy of Internet Commons

The explanation by Verizon about the recent dispute between Netflix and Verizon highlights the problems of inadequate ownership rights [2, 3, 4] and the lack of commonly accepted sustainable practices on the internet. A separate factor that makes things even more complicated is that the internet was not designed to carry video streams.

There are historical precedents for the conflicts we are witnessing with the internet — the “Tragedy of the Commons” [2, 3, 4, 5, 6, 7, 8, 9, 10]. In medieval England and Europe there was a practice of sharing a common parcel of land as grazing ground for cattle. Herdsmen would bring their cattle to the common grass fields. The tragedy is that the benefit of bringing an additional animal belongs solely to its herdsman, while the cost of overgrazing is shared by all.

With the internet we have a similar conflict. As in the case of the grazing grounds, the conflict is a result of ill-defined property rights and insufficient regulation — self-imposed or external.

The ownership issues related to internet are complex. The Internet Transit Map provides a logical overview of the internet. The connections marked Cloud Access (4) and LAN Switching (7) are the areas of this conflict. The logical structure of the conflicting area is shown in Internet Commons Architecture (below).

The conflict arises due to the multiplicity of ownership, and lack of commonly accepted sustainable practices.

Unlike the medieval grasslands, different parts of the internet commons are owned by different parties. The Internet Commons Architecture is a simplified logical representation of the connections in a shared data center.

This is how the ownership in a Commons Data Center may be distributed. The Data Center (1) building and land are owned by an internet landlord. The high-speed communication lines (2) and the Transmission Switch (3) are owned by Internet Service Providers, who provide connectivity for the facility. The Cabinets (5) belong to different Data Center Operators. Within the Cabinets (5) there are Servers (8), LAN/SAN Switches (7), and Distribution Switches (6). In addition, there is cabling connecting these communication systems and servers. The cabinets and the systems within a cabinet may be owned by the same company, or the space within a cabinet may be leased out to several companies, who in turn own the systems within the cabinet.
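To make the multiplicity of ownership concrete, the sketch below (Python, with placeholder owner names that are assumptions, not a description of any actual facility) writes the structure down as a mapping from each numbered component to its owner.

    # Sketch of the ownership structure described above; component numbers
    # follow the Internet Commons Architecture diagram, owners are placeholders.
    commons_data_center = {
        1:  ("Data Center building and land", "internet landlord"),
        2:  ("High-speed communication lines", "Internet Service Provider"),
        3:  ("Transmission Switch",            "Internet Service Provider"),
        5:  ("Cabinets",                       "Data Center Operator"),
        6:  ("Distribution Switches",          "cabinet owner or lessee"),
        7:  ("LAN/SAN Switches",               "cabinet owner or lessee"),
        8:  ("Servers",                        "cabinet owner or lessee"),
        10: ("Cloud Services (SaaS)",          "no ownership; pay-per-use tenant"),
    }

    owners = {owner for _, owner in commons_data_center.values()}
    print(f"{len(owners)} distinct parties share one facility:")
    for n, (component, owner) in sorted(commons_data_center.items()):
        print(f"  ({n}) {component}: {owner}")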

“Cloud Services,” or Software as a Service (SaaS) [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] (10), is another innovation in the Data Center. Without owning anything in the Data Center, SaaS customers can run software on the servers only when needed, and pay only for the usage.

The companies leasing space in cabinets may use their own cables for connections or the Data Center operator may provide some of the cables. And “Cloud companies” do not own anything in the Data Center, but use the services of “Cloud Providers,” and generate internet traffic.

In the case of the Netflix-Verizon conflict, the communication lines (2) and Transmission Switches (3) were installed by Verizon. Netflix was using “Cloud Services” (10) provided by Amazon.

About the economics – adding Servers (8), LAN Switches (7) and related cables (9) is relatively inexpensive compared to the Transmission lines (2), Transmission Switches (3), Distribution Switches (6), and associated cabling (2, 4).

As time went on, Netflix’s usage of “cloud services” increased, even though it had no ownership of the systems in the Data Center. The resulting increase in internet traffic made it necessary to upgrade the Data Center (1) infrastructure facilities, which include the Transmission lines (2), Transmission Switch (3), Distribution Switch (6), and associated cabling (4). The dispute is over who should pay for the upgrade.

The Netflix-Verizon dispute illustrates the need for better clarity on ownership rights, responsibilities and usage rights at various transit points on the Internet — since competing commercial interests are involved.

If you have topics for discussion and/or have questions, please include them in your comments below.


Internet Fast and Slow Lanes

One area of confusion in the current internet debate is the “fast lane” and “slow lane” controversy. The FCC Chairman Tom Wheeler said, “I will not allow some companies to force Internet users into a slow lane so that others with special privileges can have superior service.” A closer examination of the way the Internet is constructed reveals that this is not a real issue; the real issues are completely different.

There are two kinds of internet connections:

      1. retail connection (subscriber access), and
      2. wholesale connection (among carriers and content providers).

The retail connections are marked (2) and (6) in the Internet Transit Map. The wholesale connections are marked (1), (4) and (5).

Using the transportation analogy, the wholesale connections are freeways, and the retail connections are on/off-ramps. The agreements Netflix has with Comcast, Verizon and others (as far as I can tell) are for wholesale connections.

How the retail connections operate is what is critical for most regular users of the Internet, except for those who may be operating remote servers in co-location centers. The retail connection is the regular Internet access, or broadband access.

Common technologies used for internet access are xDSL [2 (pdf), 3], cable [2 (pdf), 3 (pdf)], fiber (pdf) [2, 3, 4, 5, 6, 7, 8, 9 (pdf)], WiFi [2, 3, 4], and 3G/4G/LTE [2]. These connections are for a single subscriber to connect a single computer or a few computers. Wholesale connections, on the other hand, handle a very large number of connections, thousands or even millions.

Now, the economics of retail and wholesale connections. The cost of a retail connection is incurred on a per-subscriber basis, but the cost of a wholesale connection is distributed over all of its potential users. So wholesale connections are very cost-effective, while retail connections are very cost-sensitive.
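A minimal sketch (with hypothetical figures) of why the two kinds of connections behave so differently economically:

    # Retail vs. wholesale connection economics; all figures are illustrative.

    # Retail: one access line serves one subscriber, so the full cost of the
    # line is borne by that subscriber.
    retail_line_cost = 600            # build + equipment for one subscriber line
    retail_cost_per_user = retail_line_cost / 1

    # Wholesale: one high-capacity link carries traffic for every subscriber
    # whose data flows over it, so its cost is spread very thin.
    wholesale_link_cost = 2_000_000   # one carrier-to-carrier interconnect
    subscribers_served = 500_000
    wholesale_cost_per_user = wholesale_link_cost / subscribers_served

    print(f"Retail cost per user:    {retail_cost_per_user:.2f}")
    print(f"Wholesale cost per user: {wholesale_cost_per_user:.2f}")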

In addition, the downstream (towards the subscriber) and upstream (towards the service provider) speeds differ for most of these technologies.

Now to performance, which is what the “fast lane” and “slow lane” controversy is about. Depending on the technology, the performance (peak speed, sustained speed, average speed) varies widely. For xDSL, peak speeds range from 144 kbps to 52 Mbps and more downstream, depending on distance, and from 144 kbps to 6 Mbps upstream. For cable, the speed limit is about 30 Mbps, but most providers offer 1 Mbps to 6 Mbps downstream and 128 kbps to 768 kbps upstream. Unlike other access technologies, cable is a shared medium: the available bandwidth of a cable connection is shared by all the subscribers connected on the shared path. So the actual speed could be much lower if your neighbors share the same cable and are using it at the same time.
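A quick sketch of the shared-medium effect, taking the nominal 30 Mbps cable figure above as the assumption:

    # Cable is a shared medium: the segment's downstream capacity is divided
    # among all subscribers actively using it at the same time.

    segment_capacity_mbps = 30          # nominal downstream capacity (from the text)
    for active_users in (1, 5, 10, 30): # concurrent users on the same shared path
        per_user_mbps = segment_capacity_mbps / active_users
        print(f"{active_users:>2} active users -> about {per_user_mbps:.1f} Mbps each")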

Peak speed for fiber differs depending on the service provider. Peak speed for Google Fiber is 1 Gbps upstream and downstream. Verizon FiOS offers speeds up to 500 Mbps (downstream) and 100 Mbps (upstream). AT&T U-verse offers speeds up to 300 Mbps.

The peak speed is what is normally advertised for the subscriber connection, but the actual speed depends on many factors. For example, if you are watching a video clip, there is a constant downstream flow of data (about 8 Mbps for MPEG2 [2]) after you have selected the address (URL) of the video you are requesting. But if you are using online chat, there are gaps between interactions, and the amount of data being transferred is small, a few bps.
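Taking the roughly 8 Mbps MPEG2 figure at face value, a quick calculation shows how quickly video fills an access link, while chat traffic barely registers:

    # How many constant-rate MPEG2 streams fit into a given downstream link?
    # The 8 Mbps figure is from the text; the link speeds are examples.

    mpeg2_stream_mbps = 8
    for link_mbps in (6, 30, 100):
        streams = link_mbps // mpeg2_stream_mbps
        print(f"{link_mbps:>3} Mbps downstream fits {streams} concurrent MPEG2 stream(s)")

    # A chat message, by contrast, is a few hundred bytes with long idle gaps,
    # so its average rate is only a few bps against the same capacity.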

The peak speed is the maximum capacity of the connection, determined by the access technology. The average (and sustained) speed, however, depends on the destination system from which the data is requested, and on how busy all the intermediate “Core Routing” and “Edge Routing” nodes are. The “Edge Router” connected to each subscriber is usually a bottleneck, since many subscribers are connected to it and it can become overloaded.

Optimum performance of subscriber devices requires efficient performance by the “Edge Routers” connecting to them, since an Edge Router has to manage the traffic from all the subscribers connected to it. The “Edge Router” can thus become a performance bottleneck, decreasing the data speed (increasing the delay) experienced by subscribers. So rather than focusing on the “fast lane” and “slow lane,” what is relevant is the performance and capacity of “Edge Routers.”
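A hedged sketch of the bottleneck effect: if the Edge Router’s uplink is oversubscribed relative to the sum of subscriber peak rates (a normal design choice), the achievable speed per subscriber falls as more subscribers become active at once. The ratios below are assumptions chosen only for illustration.

    # Oversubscription at the "Edge Router": many subscriber links share one uplink.
    # All numbers are illustrative assumptions.

    subscriber_peak_mbps = 50       # advertised peak per subscriber
    subscribers_attached = 500      # subscribers homed on this edge router
    uplink_capacity_mbps = 10_000   # capacity toward the core

    ratio = (subscriber_peak_mbps * subscribers_attached) / uplink_capacity_mbps
    print(f"Oversubscription ratio: {ratio:.1f}:1")

    # Achievable per-subscriber throughput as more subscribers are active at once.
    for active_fraction in (0.1, 0.3, 0.6):
        active = int(subscribers_attached * active_fraction)
        per_user = min(subscriber_peak_mbps, uplink_capacity_mbps / active)
        print(f"{active:>3} active subscribers -> about {per_user:.0f} Mbps each")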

Service providers have a built-in incentive not to upgrade “Edge Routers” for optimum performance, since doing so increases the overall network load. “Edge Routers” of some network providers regularly underperform.

One of the past FCC decisions makes matters worse. The FCC has ruled that service providers may throttle user data for traffic management purposes. This is a mistake. Instead, the FCC should develop performance measures for “Edge Routers,” since they provide a simplified way to assure subscriber service (speed) levels.

“Bandwidth abuse” (heavy bandwidth users) [2, 3, 4] is a separate issue and needs to be handled separately.

If you have topics for discussion and/or have questions, please include them in your comments below, or send them directly.


An Internet Transit Map

“In the world of tech policy there are few issues more conflict-laden and wrapped up in misunderstandings than net neutrality,” says Doug Brake in The Hill. There is no shortage of internet maps [2, 3]. But what is missing is a “transit map” for the Internet — a user’s guide for the Internet.

A transit map, commonly used for subways, is a simplified logical diagram of a transit network that makes the transportation network easy to use. To make matters worse, as the future universal medium for human communication and interaction, issues related to the Internet naturally involve technology, jurisprudence, economics, commerce, finance, consumer protection, market monopoly issues, government oversight, politics, and economic development — to name a few. So it is easy to add issues to the discussion that are not relevant or material and create confusion.

Unlike many media-anointed experts, I have spent years designing and developing network systems and applications, and have invented and patented technologies for improving networks. To help clarify the issues, I created an Internet Transit Map (below). The Internet Transit Map is a simplified logical diagram (“reference model”) of the Internet, intended to provide clarity for discussions about regulating the Internet.

The Internet Transit Map (ITM) shows the top-level logical systems and critical interconnections. There are two primary types of routers:

        1. Core routers, and
        2. Edge routers.

Different manufacturers offer products that differ in functionality, performance and capacity.

Details about products Cisco offers are available here [2, 3].

Details about Juniper products are available here [2, 3].

Details about products from Alcatel-Lucent are available here [2, 3].

Details about solutions from Ericsson are available here [2, 3].

The nodes marked “Core Routing” and “Edge Routing” in the Internet Transit Map may represent a single product configuration, or an entire network. For example, one “Core Routing” node could represent the full “Core Routing” in the Internet backbone provided by Sprint (data), or by Deutsche Telekom (IP Transit), or by CenturyLink. Other maps of physical networks are available here [2, 3, 4, 5].

In addition, there are at least seven types of interconnections (interfaces) — numbered 1 through 7 — that are critical for the proper functioning of the Internet. These interconnections are made up of different types of hardware products and the software stacks that operate over them. Compatibility and interoperability of hardware and software at these interconnections are essential for the Internet to function properly.

The technologies, systems and protocols used in each of these critical components of the Internet are so different that any generalized discussion of “the Internet” to ensure its “openness” is meaningless. Issues need to be identified, discussed and resolved with respect to each of the interconnections (1-7) in the Internet Transit Map.

An additional reason for the confusion is the number of topics involved:

If you have topics for discussion and/or have questions, please include them in your comments below.


Recommendations to the FCC for the path forward

The FCC Chairman, Tom Wheeler, wrote accurately in his blog, “the idea of net neutrality (or the Open Internet) has been discussed for a decade with no lasting results.” The stalemate is the result of an attempt to solve technology problems using legal and political methods.

Designing network systems involves making tradeoffs for efficient resource allocation, functionality and deadlock prevention. This inherently involves treating different sets of bits differently. Hence trying to mandate “principles of equality” in network design is a meaningless exercise, as has been demonstrated.

The legal root of the current conundrum is the past FCC mistake of classifying the Internet as an “information service,” exempt from FCC regulations. Having exempted the Internet from regulation, the FCC was logically rebuffed by the appeals court when it tried to regulate the Internet through fuzzy linguistics.

The intrinsic dilemma with Internet access is that the cost per connection is not constant, but varies with the technology and the “deployment distance” for each termination. The result is a higher cost in less populated areas (disregarding the affordability factor). Hence, some form of network deployment subsidy is necessary in the current market configuration.
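A minimal sketch, with made-up build costs, of how per-connection cost scales with the amount of access plant that must be deployed per home:

    # Per-connection deployment cost rises with the distance of access plant
    # (cable or fiber) built per home served. All figures are illustrative.

    cost_per_km = 20_000  # hypothetical build cost per km of access plant

    scenarios = {
        "dense urban block": {"homes": 200, "plant_km": 2},
        "suburban street":   {"homes": 60,  "plant_km": 3},
        "rural road":        {"homes": 10,  "plant_km": 8},
    }

    for name, s in scenarios.items():
        per_home = cost_per_km * s["plant_km"] / s["homes"]
        print(f"{name}: about {per_home:,.0f} per connection")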

Recommendations

Therefore, the long-term solution is to declare that Internet access will be regulated to conform to the “common carrier principles” that have evolved over the centuries — starting with ferry operators — taking into account the unique attributes of the Internet. In exchange for accepting network deployment subsidies, network providers must comply with the common carrier principles for Internet access, to be formulated.

Obviously, this requires action by the Congress, which is messy, complicated and long drawn-out. But a declaration to that effect will provide clarity in the marketplace, and may even speed up the Congressional process.

The rise of the Internet, the divestiture of AT&T [2, 3, 4, 5, 6], and the downsizing of Bell Labs [2, 3, 4, 5, 6, 7, 8] created a market vacuum. Bell Labs used to be the final technology authority regarding networks. Parts of Bell Labs were absorbed into different entities, dispersing the knowledge and expertise accumulated over a century. The current FCC regulatory framework presupposes the existence of an external technology authority.

The current network market structure is vastly different from the monopoly market of the era when the FCC was formed. As history has shown, the FCC cannot fulfill its mandate without independent technology expertise. The current legal-centric FCC processes may have been adequate when quasi-independent technology expertise was available externally, but changes are necessary to effectively manage the changed market structure. The simplest solution would be to develop that expertise internally, within the FCC, and enhance the current legal-centric processes to factor in technology-driven constraints, limitations and possibilities.

The legal-centric FCC processes also have a secondary deleterious impact. Issues related to networks belong primarily in the technology domain. Superimposing a legal framework on technology evolution can create unhelpful distortions. Technology issues and problems need to be addressed as such. The “net neutrality” discussions are an illustrative example of how not to do technology policy.

In addition to the complications created by the rapid development of new technologies and wholesale changes in market structure, the idiosyncratic behaviors of financial markets were also in play — as in the dot-com bubble. The result is widespread misconceptions about the “Internet.” One critical issue is the misidentification of the success factors of the Internet.

The “magic of the Internet” is the universal adoption of public common standards and practices, which resulted in the astonishing benefits generated. The key to continuing the “magic of the Internet” is making sure that the common standards and practices are followed so that unified network and application interoperability are maintained, helping continued development of a vibrant marketplace.

The focus of regulatory oversight needs to shift from the current “Internet focus” to “open standards, interfaces and practices.” A critical precursor to that step is for the FCC to become strictly “technology neutral,” allowing the free market to operate and create the best possible network capabilities built on public common standards, interfaces and practices.


How to learn effectively using the internet

It is common knowledge that the internet is a treasure trove of information. And one of the often repeated applications of the internet and broadband is education. But using the internet as a learning tool is easier said than done.

There are many challenges to learning with the internet. Television and social media have contributed to a declining attention span (“You Now Have a Shorter Attention Span Than a Goldfish“). And the internet is full of distractions that easily divert attention. The use of the internet for commerce, advertising and marketing does not help either. Learning is hard work, which is another inhibiting factor.

However, blogging on the internet can be an effective learning tool. Simply reading information has a retention rate of less than 20%. When using blogging as a learning tool, you work with the content and analyze it from different perspectives, enhancing retention and comprehension by making it an active learning process; comprehension can increase to over 75%.

To get started, the essential skill is self-discipline [2, 3], since this is self-directed learning. Curiosity and motivation to learn are also essential.

The system described here, Net Learning Cluster™, was developed by the author out of a practical need (more details below).

Net Learning Cluster (NLC) consists of structured use of easily available tools on the internet. Net Learning Cluster helps with learning subject matter, concentration, and language skills: reading, editing, authoring, messaging and more.

Net Learning Cluster turns internet browsing into a goal-directed activity of creating (bookmarking) blog posts. Social Bookmarking [2, 3] is one of the most popular Social Media applications on the internet. WordPress makes this process as easy as 1, 2, 3, 4.

This is how it works.

Allocate dedicated time for this learning effort. Your learning goal needs to be sufficiently broad to be effective, and the practice must be regular.

For each Net Learning Cluster session, have a large number of articles and web pages ready for review, so that you don’t spend time searching for material during the session. With each web page or article, ask yourself: Am I interested in reading this again? Next week? Next month? Next year? Learn to arrive at this decision rapidly. If the answer is ‘No’, proceed to the next one. This step will help improve your reading, comprehension, and decision-making skills.

If the answer is yes, then identify the main idea or key points and create a blog post, with a link to the web page. Business Exchange used this idea. For examples, please review the posts in the News links blog.

If you feel strongly that others need to read the information as well, then create a synopsis. The aim of the synopsis is to motivate the reader to visit the original site. Use the power of the blogging tools and of the internet as a rich medium to make the synopsis as compelling as possible. However, you must be mindful of copyright, since the original content belongs to someone else.

To learn how to create compelling synopses from articles while remaining mindful of copyright, study how blog posts are constructed in the Net economy.
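The whole session can be summed up as a simple triage loop. The sketch below is only a schematic of the steps described above; the reading queue and the publish_post helper are hypothetical placeholders, not part of any actual tool.

    # A schematic of the Net Learning Cluster triage loop described above.
    # publish_post is a hypothetical stand-in for your blogging tool.

    def publish_post(title, url, body=""):
        print(f"Posted: {title} -> {url}")

    def triage(article):
        # Step 1: decide quickly whether you would want to read this again.
        if not article["worth_rereading"]:
            return  # move on to the next item

        # Step 2: capture the main idea, with a link back to the source.
        publish_post(article["title"], article["url"], body=article["key_point"])

        # Step 3: if others should read it too, add a synopsis that motivates
        # a visit to the original site (mindful of copyright).
        if article["share_with_others"]:
            publish_post("Synopsis: " + article["title"], article["url"],
                         body=article["synopsis"])

    # Prepared before the session so no time is spent searching during it.
    reading_queue = [
        {"title": "Example article", "url": "https://example.com/a",
         "worth_rereading": True, "share_with_others": False,
         "key_point": "One-line takeaway.", "synopsis": ""},
    ]

    for article in reading_queue:
        triage(article)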

Once you have practiced these steps, or if you already have ideas of your own that you want to write about, Viewpoint is a place for original articles.

To use this methodology for self-directed learning, it is not necessary to use the blogs provided as examples. You may create your own blogs, but the collaboration available with the example blogs cited will be missing. There is at least one research report using this methodology.

This methodology, Net Learning Cluster, was developed by the author with a learning objective: How does the United States government work? More specifically, how does the Federal Communications Commission (FCC) operate? The article, Net neutrality: issues and solution, is a result of this exercise.

If you have reached this far, take the next step: become a contributor.
