Cybersecurity, computer security or IT security is the protection of computer systems from theft of or damage to their hardware, software or information, as well as from disruption or misdirection of the services they provide.
Cybersecurity includes controlling physical access to the hardware, as well as protecting against harm that may come via network access, or data and code injection. [1] In addition, because operators may be tricked, whether deliberately or accidentally, into deviating from secure procedures, IT security can also be compromised through human error or malpractice. [2]
The field is of growing importance due to the increasing reliance on computer systems, the Internet, [3] and wireless networks such as Bluetooth and Wi-Fi, and to the growth of "smart" devices, including smartphones, televisions and the many small devices that make up the Internet of Things.
A vulnerability is a weakness in design, implementation, operation or internal control. Most of the vulnerabilities that have been discovered are documented in the Common Vulnerabilities and Exposures (CVE) database.
An exploitable vulnerability is one for which at least one working attack or "exploit" exists. [4] Vulnerabilities are often hunted or exploited with the aid of automated tools or manually using customized scripts.
To secure a computer system, it is important to understand the attacks that can be made against it. These threats can typically be classified into one of the categories below:
A backdoor in a computer system, a cryptosystem or an algorithm, is any secret method of bypassing normal authentication or security controls. They may exist for a number of reasons, including by original design or from poor configuration. They may have been added by an authorized party to allow some legitimate access, or by an attacker for malicious reasons; but regardless of the motives for their existence, they create a vulnerability.
Denial-of-service (DoS) attacks are designed to make a machine or network resource unavailable to its intended users. [5] Attackers can deny service to individual victims, such as by deliberately entering a wrong password enough consecutive times to cause the victim's account to be locked, or they may overload the capabilities of a machine or network and block all users at once. While a network attack from a single IP address can be blocked by adding a new firewall rule, many forms of distributed denial-of-service (DDoS) attack are possible, where the attack comes from a large number of points – and defending is much more difficult. Such attacks can originate from the zombie computers of a botnet, but a range of other techniques are possible, including reflection and amplification attacks, where innocent systems are fooled into sending traffic to the victim.
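The account-lockout denial of service described above is easy to see in code. The sketch below is illustrative only: the threshold, lockout window, and function names are assumptions, not a real API. It shows how a well-meaning lockout policy hands an attacker a DoS primitive, since anyone who can submit failed logins for a username can lock out its legitimate owner.

```python
import time
from collections import defaultdict

LOCKOUT_THRESHOLD = 5    # consecutive failures before the account locks
LOCKOUT_SECONDS = 300    # how long the account stays locked

failures = defaultdict(int)   # username -> consecutive failed attempts
locked_until = {}             # username -> timestamp when the lock expires

def record_login(username, success, now=None):
    """Track failures and lock the account after too many in a row.

    Returns True if the attempt is allowed to proceed, False if the
    account is currently locked.  Note the DoS angle: an attacker who
    *deliberately* fails logins locks out the legitimate user.
    """
    now = time.time() if now is None else now
    if locked_until.get(username, 0) > now:
        return False                       # account locked: reject attempt
    if success:
        failures[username] = 0             # successful login resets the count
        return True
    failures[username] += 1
    if failures[username] >= LOCKOUT_THRESHOLD:
        locked_until[username] = now + LOCKOUT_SECONDS
    return True
```

Real systems usually prefer rate limiting or exponential back-off over hard lockouts, precisely to blunt this attack.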
An unauthorized user gaining physical access to a computer is most likely able to directly copy data from it. They may also compromise security by making operating system modifications, installing software worms, keyloggers or covert listening devices, or by using wireless mice. [6] Even when the system is protected by standard security measures, these may be bypassed by booting another operating system or tool from a CD-ROM or other bootable media. Disk encryption and the Trusted Platform Module are designed to prevent these attacks.
Eavesdropping is the act of surreptitiously listening to a private conversation, typically between hosts on a network. For instance, programs such as Carnivore and NarusInSight have been used by the FBI and NSA to eavesdrop on the systems of internet service providers. Even machines that operate as a closed system (i.e., with no contact to the outside world) can be eavesdropped upon via monitoring the faint electromagnetic transmissions generated by the hardware; TEMPEST is a specification by the NSA referring to these attacks.
Spoofing is the act of masquerading as a valid entity through falsification of data (such as an IP address or username), in order to gain access to information or resources that one is otherwise unauthorized to obtain. [7] [8] There are several types of spoofing, including email spoofing, IP address spoofing, and MAC address spoofing.
Tampering describes a malicious modification of products. Examples include so-called "Evil Maid" attacks and security services' planting of surveillance capability into routers. [10]
Privilege escalation describes a situation where an attacker with some level of restricted access is able to, without authorization, elevate their privileges or access level. For example, a standard computer user may be able to fool the system into giving them access to restricted data; or even to "become root" and have full unrestricted access to a system.
Phishing is the attempt to acquire sensitive information such as usernames, passwords, and credit card details directly from users. [11] Phishing is typically carried out by email spoofing or instant messaging, and it often directs users to enter details at a fake website whose look and feel are almost identical to the legitimate one. Preying on a victim's trust, phishing can be classified as a form of social engineering.
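One common technical countermeasure to phishing is flagging domains that are typographically close to a trusted one, since fake sites often differ from the legitimate domain by a single character. The sketch below is illustrative only: the `TRUSTED` list and distance threshold are invented assumptions, and real defenses must also handle Unicode homoglyphs, subdomain tricks, and newly registered domains.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical allow-list of domains the user actually trusts.
TRUSTED = ["paypal.com", "example-bank.com"]

def suspicious(domain: str, max_distance: int = 2) -> bool:
    """Flag domains that are *close to* but not equal to a trusted domain."""
    return any(0 < edit_distance(domain, t) <= max_distance for t in TRUSTED)
```

A lookalike such as `paypa1.com` (digit one for the letter l) is flagged, while the genuine domain and unrelated domains are not.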
Clickjacking, also known as a "UI redress attack" or "user interface redress attack", is a malicious technique in which an attacker tricks a user into clicking on a button or link on another webpage while the user intended to click on the top-level page. This is done using multiple transparent or opaque layers. The attacker is essentially "hijacking" the clicks meant for the top-level page and routing them to some other, irrelevant page, most likely owned by someone else. A similar technique can be used to hijack keystrokes. By carefully crafting a combination of stylesheets, iframes, buttons and text boxes, an attacker can lead a user into believing that they are typing a password or other information into an authentic webpage while it is being channeled into an invisible frame controlled by the attacker.
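The standard server-side defense against clickjacking is to tell the browser not to render the page inside a frame at all. The minimal WSGI application below is an illustrative sketch of the two response headers commonly used for this; it is not a complete hardening recipe.

```python
# Minimal WSGI app that sends the anti-framing headers browsers honour.
def app(environ, start_response):
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        # Legacy header: refuse to be rendered inside any frame.
        ("X-Frame-Options", "DENY"),
        # Modern equivalent, expressed via Content Security Policy.
        ("Content-Security-Policy", "frame-ancestors 'none'"),
    ]
    start_response("200 OK", headers)
    return [b"<html><body>Protected page</body></html>"]
```

Serving this (e.g. with `wsgiref.simple_server`) and embedding the page in an `<iframe>` on another site will cause a compliant browser to refuse to render it, defeating the transparent-overlay trick.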
Social engineering aims to convince a user to disclose secrets such as passwords, card numbers, etc. by, for example, impersonating a bank, a contractor, or a customer. [12]
A common scam involves fake CEO emails sent to accounting and finance departments. In early 2016, the FBI reported that the scam has cost US businesses more than $2bn in about two years. [13]
In May 2016, the Milwaukee Bucks NBA team was the victim of this type of cyber scam with a perpetrator impersonating the team's president Peter Feigin, resulting in the handover of all the team's employees' 2015 W-2 tax forms. [14]
Employee behavior can have a big impact on information security in organizations. Cultural factors can help different segments of an organization work effectively toward information security, or work against it. "Exploring the Relationship between Organizational Culture and Information Security Culture" provides the following definition of information security culture: "ISC is the totality of patterns of behavior in an organization that contribute to the protection of information of all kinds." [15]
Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization's information security "effort" and often take actions that run against organizational information security interests.[ citation needed] Research shows that information security culture needs to be improved continuously. In "Information Security Culture from Analysis to Change", the authors commented, "It's a never-ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation. [16]
The growth in the number of computer systems, and the increasing reliance upon them of individuals, businesses, industries and governments means that there are an increasing number of systems at risk.
The computer systems of financial regulators and financial institutions like the U.S. Securities and Exchange Commission, SWIFT, investment banks, and commercial banks are prominent hacking targets for cybercriminals interested in manipulating markets and making illicit gains. [17] Web sites and apps that accept or store credit card numbers, brokerage accounts, and bank account information are also prominent hacking targets, because of the potential for immediate financial gain from transferring money, making purchases, or selling the information on the black market. [18] In-store payment systems and ATMs have also been tampered with in order to gather customer account data and PINs.
Computers control functions at many utilities, including coordination of telecommunications, the power grid, nuclear power plants, and valve opening and closing in water and gas networks. The Internet is a potential attack vector for such machines if connected, but the Stuxnet worm demonstrated that even equipment controlled by computers not connected to the Internet can be vulnerable. In 2014, the Computer Emergency Readiness Team, a division of the Department of Homeland Security, investigated 79 hacking incidents at energy companies. [19] Vulnerabilities in smart meters (many of which use local radio or cellular communications) can cause problems with billing fraud. [20]
The aviation industry is very reliant on a series of complex systems which could be attacked. [21] A simple power outage at one airport can cause repercussions worldwide, [22] much of the system relies on radio transmissions which could be disrupted, [23] and controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore. [24] There is also potential for attack from within an aircraft. [25]
In Europe, with the Pan-European Network Service [26] and NewPENS, [27] and in the US with the NextGen program, [28] air navigation service providers are moving to create their own dedicated networks.
The consequences of a successful attack range from loss of confidentiality to loss of system integrity, air traffic control outages, loss of aircraft, and even loss of life.
Desktop computers and laptops are commonly targeted to gather passwords or financial account information, or to construct a botnet to attack another target. Smartphones, tablet computers, smart watches, and other mobile devices such as quantified self devices like activity trackers have sensors such as cameras, microphones, GPS receivers, compasses, and accelerometers which could be exploited, and may collect personal information, including sensitive health information. Wi-Fi, Bluetooth, and cell phone networks on any of these devices could be used as attack vectors, and sensors might be remotely activated after a successful breach. [29]
Home automation devices such as the Nest thermostat are also potential targets, and their number is increasing. [29]
Large corporations are common targets. In many cases the attack is aimed at financial gain through identity theft and involves data breaches, such as the loss of millions of clients' credit card details by Home Depot, [30] Staples, [31] Target Corporation, [32] and the 2017 breach of Equifax. [33]
Some cyberattacks are ordered by foreign governments, which engage in cyberwarfare with the intent to spread their propaganda, commit sabotage, or spy on their targets. Many people believe the Russian government played a major role in the US presidential election of 2016 by using Twitter and Facebook to affect the results, although no conclusive evidence of this has been found. [34]
Medical records have been targeted for use in general identity theft, health insurance fraud, and impersonating patients to obtain prescription drugs for recreational purposes or resale. [35] Although cyber threats continue to increase, 62% of all organizations did not increase security training for their business in 2015. [36] [37]
Not all attacks are financially motivated, however. For example, security firm HBGary Federal suffered a serious series of attacks in 2011 from hacktivist group Anonymous in retaliation for the firm's CEO claiming to have infiltrated their group, [38] [39] and in the Sony Pictures attack of 2014 the motive appears to have been to embarrass the company with data leaks and cripple it by wiping workstations and servers. [40] [41]
Vehicles are increasingly computerized, with engine timing, cruise control, anti-lock brakes, seat belt tensioners, door locks, airbags and advanced driver-assistance systems on many models. Additionally, connected cars may use Wi-Fi and Bluetooth to communicate with onboard consumer devices and the cell phone network. [42] Self-driving cars are expected to be even more complex.
All of these systems carry some security risk, and such issues have gained wide attention. [43] [44] [45] Simple examples of risk include a malicious compact disc being used as an attack vector, [46] and the car's onboard microphones being used for eavesdropping. However, if access is gained to a car's internal controller area network, the danger is much greater [42] – and in a widely publicised 2015 test, hackers remotely carjacked a vehicle from 10 miles away and drove it into a ditch. [47] [48]
Manufacturers are reacting in a number of ways, with Tesla in 2016 pushing out some security fixes "over the air" into its cars' computer systems. [49]
In the area of autonomous vehicles, in September 2016 the United States Department of Transportation announced some initial safety standards, and called for states to come up with uniform policies. [50] [51]
Government and military computer systems are commonly attacked by activists [52] [53] [54] [55] and foreign powers. [56] [57] [58] [59] Local and regional government infrastructure such as traffic light controls, police and intelligence agency communications, personnel records, student records, [60] and financial systems are also potential targets as they are now all largely computerized. Passports and government ID cards that control access to facilities which use RFID can be vulnerable to cloning.
The Internet of things (IoT) is the network of physical objects such as devices, vehicles, and buildings that are embedded with electronics, software, sensors, and network connectivity that enables them to collect and exchange data [61] – and concerns have been raised that this is being developed without appropriate consideration of the security challenges involved. [62] [63]
While the IoT creates opportunities for more direct integration of the physical world into computer-based systems, [64] [65] it also provides opportunities for misuse. In particular, as the Internet of Things spreads widely, cyber attacks are likely to become an increasingly physical (rather than simply virtual) threat. [66] If a front door's lock is connected to the Internet, and can be locked/unlocked from a phone, then a criminal could enter the home at the press of a button from a stolen or hacked phone. People could stand to lose much more than their credit card numbers in a world controlled by IoT-enabled devices. Thieves have also used electronic means to circumvent non-Internet-connected hotel door locks. [67]
Medical devices have either been successfully attacked or had potentially deadly vulnerabilities demonstrated, including both in-hospital diagnostic equipment [68] and implanted devices including pacemakers [69] and insulin pumps. [70] There are many reports of hospitals and hospital organizations getting hacked, including ransomware attacks, [71] [72] [73] [74] Windows XP exploits, [75] [76] viruses, [77] [78] [79] and breaches of sensitive data stored on hospital servers. [80] [72] [81] [82] [83] On 28 December 2016 the US Food and Drug Administration released its recommendations for how medical device manufacturers should maintain the security of Internet-connected devices – but no structure for enforcement. [84] [85]
Serious financial damage has been caused by security breaches, but because there is no standard model for estimating the cost of an incident, the only data available is that which is made public by the organizations involved. "Several computer security consulting firms produce estimates of total worldwide losses attributable to virus and worm attacks and to hostile digital acts in general. The 2003 loss estimates by these firms range from $13 billion (worms and viruses only) to $226 billion (for all forms of covert attacks). The reliability of these estimates is often challenged; the underlying methodology is basically anecdotal." [86] Security breaches continue to cost businesses billions of dollars, but a survey revealed that 66% of security staff do not believe senior leadership treats cyber precautions as a strategic priority. [36][ third-party source needed]
However, reasonable estimates of the financial cost of security breaches can actually help organizations make rational investment decisions. According to the classic Gordon-Loeb Model analyzing the optimal investment level in information security, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., the expected value of the loss resulting from a cyber/information security breach). [87]
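The Gordon-Loeb conclusion can be illustrated numerically. The sketch below uses one of the model's breach-probability functions, S(z, v) = v / (az + 1)^b, with purely illustrative parameter values (the constants `a` and `b` and the brute-force search are assumptions made for clarity, not part of the model's formal solution). For a vulnerability of 0.5 and a potential loss of $1 million (an expected loss of $500,000), the optimal spend comes out at roughly $21,000, a small fraction of the expected loss, as the model predicts.

```python
def expected_net_benefit(z, v, loss, a, b):
    """Benefit of investing z: reduction in expected breach loss, minus z."""
    s = v / (a * z + 1) ** b           # breach probability after investing z
    return (v - s) * loss - z

def optimal_investment(v, loss, a, b, step, max_z):
    """Brute-force the investment level that maximises expected net benefit."""
    best_z, best_val = 0.0, expected_net_benefit(0.0, v, loss, a, b)
    z = step
    while z <= max_z:
        val = expected_net_benefit(z, v, loss, a, b)
        if val > best_val:
            best_z, best_val = z, val
        z += step
    return best_z

# Illustrative numbers: vulnerability v = 0.5, potential loss $1m,
# so the expected loss is $500,000.  The optimum lands near $21,000.
z_star = optimal_investment(v=0.5, loss=1_000_000, a=0.001, b=1.0,
                            step=100, max_z=100_000)
```

The point is qualitative, not the exact figure: the optimal spend stays well under the 1/e (roughly 37%) bound on expected loss that Gordon and Loeb derive.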
As with physical security, the motivations for breaches of computer security vary between attackers. Some are thrill-seekers or vandals, some are activists, others are criminals looking for financial gain. State-sponsored attackers are now common and well resourced, but started with amateurs such as Markus Hess who hacked for the KGB, as recounted by Clifford Stoll, in The Cuckoo's Egg.
A standard part of threat modelling for any particular system is to identify what might motivate an attack on that system, and who might be motivated to breach it. The level and detail of precautions will vary depending on the system to be secured. A home personal computer, bank, and classified military network face very different threats, even when the underlying technologies in use are similar.
In computer security a countermeasure is an action, device, procedure, or technique that reduces a threat, a vulnerability, or an attack by eliminating or preventing it, by minimizing the harm it can cause, or by discovering and reporting it so that corrective action can be taken. [88] [89] [90]
Some common countermeasures are listed in the following sections:
Security by design, or alternatively secure by design, means that the software has been designed from the ground up to be secure. In this case, security is considered a main feature.
Some of the techniques in this approach include the principle of least privilege, defense in depth, code reviews and unit testing, and designing systems to fail securely.
The Open Security Architecture organization defines IT security architecture as "the design artifacts that describe how the security controls (security countermeasures) are positioned, and how they relate to the overall information technology architecture. These controls serve the purpose to maintain the system's quality attributes: confidentiality, integrity, availability, accountability and assurance services". [91]
Techopedia defines security architecture as "a unified security design that addresses the necessities and potential risks involved in a certain scenario or environment. It also specifies when and where to apply security controls. The design process is generally reproducible." Key attributes of security architecture include the relationship between different components and how they depend on one another, the determination of controls based on risk assessment, and the standardization of controls. [92]
A state of computer "security" is the conceptual ideal, attained by the use of three processes: threat prevention, detection, and response. These processes are based on various policies and system components, which include the following:
Today, computer security comprises mainly "preventive" measures, like firewalls or an exit procedure. A firewall can be defined as a way of filtering network data between a host or a network and another network, such as the Internet, and can be implemented as software running on the machine, hooking into the network stack (or, in the case of most UNIX-based operating systems such as Linux, built into the operating system kernel) to provide real-time filtering and blocking. Another implementation is a so-called "physical firewall", which consists of a separate machine filtering network traffic. Firewalls are common amongst machines that are permanently connected to the Internet.
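The filtering a firewall performs boils down to matching each packet against an ordered rule list with a default-deny fallback. The toy sketch below models that logic only; the rules themselves are invented for illustration and it is in no way a real firewall.

```python
from ipaddress import ip_address, ip_network

# Ordered rule list: first match wins.  Each rule is
# (source network, destination port or None for "any", action).
RULES = [
    (ip_network("10.0.0.0/8"), 22, "allow"),    # internal hosts may SSH in
    (ip_network("0.0.0.0/0"), 443, "allow"),    # anyone may reach HTTPS
    (ip_network("0.0.0.0/0"), None, "deny"),    # default: drop everything else
]

def filter_packet(src_ip, dst_port):
    """Return 'allow' or 'deny' for a packet, mimicking rule matching."""
    src = ip_address(src_ip)
    for net, port, action in RULES:
        if src in net and (port is None or port == dst_port):
            return action
    return "deny"   # implicit default deny if no rule matched
```

Blocking a single attacking IP address, as the DoS discussion above mentions, is just a matter of prepending one more deny rule, which is also why distributed attacks defeat this approach.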
Some organizations are turning to big data platforms, such as Apache Hadoop, to extend data accessibility and machine learning to detect advanced persistent threats. [93] [94]
However, relatively few organizations maintain computer systems with effective detection systems, and fewer still have organized response mechanisms in place. As a result, as Reuters points out: "Companies for the first time report they are losing more through electronic theft of data than physical stealing of assets". [95] The primary obstacle to effective eradication of cyber crime can be traced to excessive reliance on firewalls and other automated "detection" systems. Yet it is basic evidence gathering by using packet capture appliances that puts criminals behind bars.[ citation needed]
Vulnerability management is the cycle of identifying, remediating, or mitigating vulnerabilities, [96] especially in software and firmware. Vulnerability management is integral to computer security and network security.
Vulnerabilities can be discovered with a vulnerability scanner, which analyzes a computer system in search of known vulnerabilities, [97] such as open ports, insecure software configuration, and susceptibility to malware.
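The open-port check such scanners perform can be sketched with nothing but the standard library: attempt a TCP connection and record which ports accept it. Real scanners add service fingerprinting, timing controls, and known-vulnerability databases; this sketch covers only that first step.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Even this minimal probe should only ever be run against hosts one is authorized to test.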
Beyond vulnerability scanning, many organizations contract outside security auditors to run regular penetration tests against their systems to identify vulnerabilities. In some sectors this is a contractual requirement. [98]
While formal verification of the correctness of computer systems is possible, [99] [100] it is not yet common. Formally verified operating systems include seL4 [101] and SYSGO's PikeOS [102] [103] – but these make up a very small percentage of the market.
Properly implemented cryptography is now virtually impossible to break directly; breaking it requires some non-cryptographic input, such as a stolen key, stolen plaintext (at either end of the transmission), or some other extra cryptanalytic information.
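The point about stolen keys can be made concrete. In the stdlib sketch below, messages are authenticated with HMAC-SHA256: without the key, forging a valid tag is computationally infeasible, but an attacker who steals the key can forge tags at will. That stolen key is exactly the kind of "non-cryptographic input" referred to above. The function names are illustrative, not a standard API.

```python
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)          # 256-bit secret key

def sign(message: bytes) -> bytes:
    """Authenticate a message with HMAC-SHA256 (stdlib only)."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest avoids timing side channels when checking tags.
    return hmac.compare_digest(sign(message), tag)
```

A tampered message fails verification, while anyone holding `key` can mint a valid tag for any message they like.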
Two-factor authentication is a method for mitigating unauthorized access to a system or sensitive information. It requires "something you know" (a password or PIN) and "something you have" (a card, dongle, cellphone, or other piece of hardware). This increases security, as an unauthorized person needs both of these to gain access.
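In practice the "something you have" factor is often a phone or token generating time-based one-time passwords. The sketch below implements the core of TOTP (the RFC 6238 construction, built on RFC 4226's HOTP) with only the standard library; a production system would also handle clock drift, secret provisioning, and rate limiting.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep=30, digits=6, now=None) -> str:
    """Time-based one-time password in the style of RFC 6238 (SHA-1)."""
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Server and client compute the same code from the shared secret and the current time window, so the code both proves possession of the token and expires within seconds.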
Social engineering and direct computer access (physical) attacks can only be prevented by non-computer means, which can be difficult to enforce relative to the sensitivity of the information. Training is often involved to help mitigate this risk, but even in highly disciplined environments (e.g. military organizations), social engineering attacks can still be difficult to foresee and prevent.
Inoculation, derived from inoculation theory, seeks to prevent social engineering and other fraudulent tricks or traps by instilling a resistance to persuasion attempts through exposure to similar or related attempts. [104]
It is possible to reduce an attacker's chances by keeping systems up to date with security patches and updates, using a security scanner, and/or hiring competent people responsible for security. The effects of data loss or damage can be reduced by careful backups and insurance.
While hardware may be a source of insecurity, such as with microchip vulnerabilities maliciously introduced during the manufacturing process, [105] [106] hardware-based or assisted computer security also offers an alternative to software-only computer security. Using devices and methods such as dongles, trusted platform modules, intrusion-aware cases, drive locks, disabling USB ports, and mobile-enabled access may be considered more secure due to the physical access (or sophisticated backdoor access) required in order to be compromised. Each of these is covered in more detail below.
One use of the term "computer security" refers to technology that is used to implement secure operating systems. In the 1980s the United States Department of Defense (DoD) used the "Orange Book" [115] standards, but the current international standard ISO/IEC 15408, "Common Criteria", defines a number of progressively more stringent Evaluation Assurance Levels. Many common operating systems meet the EAL4 standard of being "Methodically Designed, Tested and Reviewed", but the formal verification required for the highest levels means that they are uncommon. An example of an EAL6 ("Semiformally Verified Design and Tested") system is Integrity-178B, which is used in the Airbus A380 [116] and several military jets. [117]
In software engineering, secure coding aims to guard against the accidental introduction of security vulnerabilities. It is also possible to create software designed from the ground up to be secure. Such systems are "secure by design". Beyond this, formal verification aims to prove the correctness of the algorithms underlying a system; [118] important for cryptographic protocols for example.
Within computer systems, two of many security models capable of enforcing privilege separation are access control lists (ACLs) and capability-based security. Using ACLs to confine programs has been proven to be insecure in many situations, such as if the host computer can be tricked into indirectly allowing restricted file access, an issue known as the confused deputy problem. It has also been shown that the promise of ACLs of giving access to an object to only one person can never be guaranteed in practice. Both of these problems are resolved by capabilities. This does not mean practical flaws exist in all ACL-based systems, but only that the designers of certain utilities must take responsibility to ensure that they do not introduce flaws.[ citation needed]
Capabilities have been mostly restricted to research operating systems, while commercial OSs still use ACLs. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open source project in the area is the E language.
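The contrast between the two models can be sketched in a few lines. Below, the ACL check consults a per-object table keyed by the caller's identity (the ambient authority that enables confused-deputy problems), while the capability check asks only whether the caller holds a token naming the object and permission. The names and data structures are invented purely for illustration.

```python
# Toy ACL: the reference monitor consults a per-object list of
# (principal, permission) pairs on every access.
acl = {
    "payroll.db": {("alice", "read"), ("alice", "write"), ("bob", "read")},
}

def acl_allows(principal, obj, permission):
    """ACL model: authority derives from *who* is asking."""
    return (principal, permission) in acl.get(obj, set())

# Toy capability: possession of the (unforgeable, in a real system)
# token *is* the authority; no ambient identity is consulted.
class Capability:
    def __init__(self, obj, permission):
        self.obj, self.permission = obj, permission

def cap_allows(cap, obj, permission):
    """Capability model: authority derives from *what* the caller holds."""
    return cap.obj == obj and cap.permission == permission
```

In the ACL model a privileged deputy acting on behalf of a less-privileged client is still checked under its own identity, which is the root of the confused-deputy problem; in the capability model the client must hand the deputy a capability, so the deputy cannot accidentally wield authority the client never had.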
Repeated education/training in security "best practices" can have a marked effect on compliance with good end user network security habits—which particularly protect against phishing, ransomware and other forms of malware which have a social engineering aspect. [119]
Responding forcefully to attempted security breaches (in the manner that one would for attempted physical security breaches) is often very difficult for a variety of reasons, including the difficulty of identifying an attacker and the jurisdictional problems of pursuing one across national borders.
Some illustrative examples of different types of computer security breaches are given below.
In 1988, only 60,000 computers were connected to the Internet, and most were mainframes, minicomputers and professional workstations. On 2 November 1988, many started to slow down, because they were running malicious code that demanded processor time and that spread itself to other computers – the first Internet "computer worm". [121] The software was traced back to 23-year-old Cornell University graduate student Robert Tappan Morris, Jr., who said he "wanted to count how many machines were connected to the Internet". [121]
In 1994, over a hundred intrusions were made by unidentified crackers into the Rome Laboratory, the US Air Force's main command and research facility. Using trojan horses, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities. The intruders were able to obtain classified files, such as air tasking order systems data and furthermore able to penetrate connected networks of National Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense contractors, and other private sector organizations, by posing as a trusted Rome center user. [122]
In early 2007, American apparel and home goods company TJX announced that it was the victim of an unauthorized computer systems intrusion [123] and that the hackers had accessed a system that stored data on credit card, debit card, check, and merchandise return transactions. [124]
The computer worm known as Stuxnet reportedly ruined almost one-fifth of Iran's nuclear centrifuges [125] by disrupting industrial programmable logic controllers (PLCs) in a targeted attack generally believed to have been launched by Israel and the United States [126] [127] [128] [129] – although neither has publicly admitted this.
In early 2013, documents provided by Edward Snowden were published by The Washington Post and The Guardian, [130] [131] exposing the massive scale of NSA global surveillance. There were also indications that the NSA may have inserted a backdoor into a NIST encryption standard; [132] the standard was later withdrawn due to widespread criticism. [133] The NSA was additionally revealed to have tapped the links between Google's data centres. [134]
In 2013 and 2014, a Russian/Ukrainian hacking ring known as "Rescator" broke into Target Corporation computers in 2013, stealing roughly 40 million credit cards, [135] and then Home Depot computers in 2014, stealing between 53 and 56 million credit card numbers. [136] Warnings were delivered at both corporations but were ignored; physical security breaches using self-checkout machines are believed to have played a large role. "The malware utilized is absolutely unsophisticated and uninteresting," says Jim Walter, director of threat intelligence operations at security technology company McAfee – meaning that the heists could have easily been stopped by existing antivirus software had administrators responded to the warnings. The size of the thefts has resulted in major attention from state and federal United States authorities, and the investigation is ongoing.
In April 2015, the Office of Personnel Management discovered it had been hacked more than a year earlier in a data breach, resulting in the theft of approximately 21.5 million personnel records handled by the office. [137] The Office of Personnel Management hack has been described by federal officials as among the largest breaches of government data in the history of the United States. [138] Data targeted in the breach included personally identifiable information such as Social Security Numbers, [139] names, dates and places of birth, addresses, and fingerprints of current and former government employees as well as anyone who had undergone a government background check. [140] It is believed the hack was perpetrated by Chinese hackers but the motivation remains unclear. [141]
In July 2015, a hacker group known as "The Impact Team" successfully breached the extramarital relationship website Ashley Madison. The group claimed that they had taken not only company data but user data as well. After the breach, The Impact Team dumped emails from the company's CEO to prove their point, and threatened to dump customer data unless the website was taken down permanently. With this initial data release, the group stated "Avid Life Media has been instructed to take Ashley Madison and Established Men offline permanently in all forms, or we will release all customer records, including profiles with all the customers' secret sexual fantasies and matching credit card transactions, real names and addresses, and employee documents and emails. The other websites may stay online." [142] When Avid Life Media, the parent company that created the Ashley Madison website, did not take the site offline, The Impact Team released two more compressed files, one 9.7GB and the second 20GB. After the second data dump, Avid Life Media CEO Noel Biderman resigned, but the website remained functional.
Conflict of laws in cyberspace has become a major cause of concern for the computer security community. Some of the main challenges and complaints about the antivirus industry are the lack of global web regulations and of a global base of common rules by which to judge, and eventually punish, cyber crimes and cyber criminals. There is no global cyber law or cyber security treaty that can be invoked for enforcing global cyber security issues.
International legal issues of cyber attacks are complicated in nature. Even if an antivirus firm locates the cybercriminal behind the creation of a particular virus or piece of malware or form of cyber attack, often the local authorities cannot take action due to lack of laws under which to prosecute. [143] [144] Authorship attribution for cyber crimes and cyber attacks is a major problem for all law enforcement agencies.
"[Computer viruses] switch from one country to another, from one jurisdiction to another – moving around the world, using the fact that we don't have the capability to globally police operations like this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the world." [143] The use of dynamic DNS, fast flux and bulletproof servers has added its own complexities to this situation.
The role of the government is to make regulations to force companies and organizations to protect their systems, infrastructure and information from cyberattacks, but also to protect its own national infrastructure, such as the national power grid. [145]
The question of whether the government should intervene in the regulation of cyberspace is a highly contentious one. Indeed, for as long as it has existed and by definition, cyberspace has been a virtual space free of government intervention. While nearly everyone agrees that improving cybersecurity is vital, is the government the best actor to address the issue? Many government officials and experts think that the government should step in and that there is a crucial need for regulation, mainly because of the failure of the private sector to solve the cybersecurity problem efficiently. Richard Clarke said during a panel discussion at the RSA Security Conference in San Francisco that he believes the "industry only responds when you threaten regulation. If the industry doesn't respond (to the threat), you have to follow through." [146] On the other hand, executives from the private sector agree that improvements are necessary, but think that government intervention would affect their ability to innovate efficiently.
Many different teams and organisations exist, including:
CSIRTs in Europe collaborate in the TERENA task force TF-CSIRT. TERENA's Trusted Introducer service provides an accreditation and certification scheme for CSIRTs in Europe. A full list of known CSIRTs in Europe is available from the Trusted Introducer website.
Most countries have their own computer emergency response team to protect network security.
On 3 October 2010, Public Safety Canada unveiled Canada's Cyber Security Strategy, following a Speech from the Throne commitment to boost the security of Canadian cyberspace. [152] [153] The aim of the strategy is to strengthen Canada's "cyber systems and critical infrastructure sectors, support economic growth and protect Canadians as they connect to each other and to the world." [153] Three main pillars define the strategy: securing government systems, partnering to secure vital cyber systems outside the federal government, and helping Canadians to be secure online. [153] The strategy involves multiple departments and agencies across the Government of Canada. [154] The Cyber Incident Management Framework for Canada outlines these responsibilities, and provides a plan for coordinated response between government and other partners in the event of a cyber incident. [155] The Action Plan 2010–2015 for Canada's Cyber Security Strategy outlines the ongoing implementation of the strategy. [156]
Public Safety Canada's Canadian Cyber Incident Response Centre (CCIRC) is responsible for mitigating and responding to threats to Canada's critical infrastructure and cyber systems. The CCIRC provides support to mitigate cyber threats, technical support to respond and recover from targeted cyber attacks, and provides online tools for members of Canada's critical infrastructure sectors. [157] The CCIRC posts regular cyber security bulletins on the Public Safety Canada website. [158] The CCIRC also operates an online reporting tool where individuals and organizations can report a cyber incident. [159] Canada's Cyber Security Strategy is part of a larger, integrated approach to critical infrastructure protection, and functions as a counterpart document to the National Strategy and Action Plan for Critical Infrastructure. [154]
On 27 September 2010, Public Safety Canada partnered with STOP.THINK.CONNECT, a coalition of non-profit, private sector, and government organizations dedicated to informing the general public on how to protect themselves online. [160] On 4 February 2014, the Government of Canada launched the Cyber Security Cooperation Program. [161] The program is a $1.5 million five-year initiative aimed at improving Canada's cyber systems through grants and contributions to projects in support of this objective. [162] Public Safety Canada aims to begin an evaluation of Canada's Cyber Security Strategy in early 2015. [154] Public Safety Canada administers and routinely updates the GetCyberSafe portal for Canadian citizens, and carries out Cyber Security Awareness Month during October. [163]
China's Central Leading Group for Internet Security and Informatization ( Chinese: 中央网络安全和信息化领导小组) was established on 27 February 2014. This Leading Small Group (LSG) of the Communist Party of China is headed by General Secretary Xi Jinping himself and is staffed with relevant Party and state decision-makers. The LSG was created to overcome the incoherent policies and overlapping responsibilities that characterized China's former cyberspace decision-making mechanisms. The LSG oversees policy-making in the economic, political, cultural, social and military fields as they relate to network security and IT strategy. This LSG also coordinates major policy initiatives in the international arena that promote norms and standards favored by the Chinese government and that emphasize the principle of national sovereignty in cyberspace. [164]
Berlin has launched a National Cyber Defense Initiative: on 16 June 2011, the German Minister for Home Affairs officially opened the new German NCAZ (National Center for Cyber Defense, Nationales Cyber-Abwehrzentrum), located in Bonn. The NCAZ cooperates closely with the BSI (Federal Office for Information Security, Bundesamt für Sicherheit in der Informationstechnik), the BKA (Federal Criminal Police Office, Bundeskriminalamt), the BND (Federal Intelligence Service, Bundesnachrichtendienst), the MAD (Military Counterintelligence Service, Amt für den Militärischen Abschirmdienst) and other national organisations in Germany responsible for national security. According to the Minister, the primary task of the new organization, founded on 23 February 2011, is to detect and prevent attacks against the national infrastructure; the Minister cited incidents such as Stuxnet.
Some provisions for cyber security have been incorporated into rules framed under the Information Technology Act 2000. [165]
The National Cyber Security Policy 2013 is a policy framework of the Ministry of Electronics and Information Technology (MeitY) which aims to protect public and private infrastructure from cyber attacks, and to safeguard "information, such as personal information (of web users), financial and banking information and sovereign data". CERT-In is the nodal agency which monitors cyber threats in the country. The post of National Cyber Security Coordinator has also been created in the Prime Minister's Office (PMO).
The Indian Companies Act 2013 has also introduced cyber law and cyber security obligations on the part of Indian directors. Some provisions for cyber security have been incorporated into rules framed under the Information Technology Act 2000, as updated in 2013. [166]
The CNCS (National Cybersecurity Centre) in Portugal promotes the free, reliable and secure use of cyberspace through the continuous improvement of national cybersecurity and of international cooperation.
Cyber-crime has risen rapidly in Pakistan. There are about 34 million Internet users and 133.4 million mobile subscribers in Pakistan. According to the Cyber Crime Unit (CCU), a branch of the Federal Investigation Agency, only 62 cases were reported to the unit in 2007 and 287 cases in 2008; the number dropped in 2009, but in 2010 more than 312 cases were registered. However, many incidents of cyber-crime go unreported. [167]
"Pakistan's Cyber Crime Bill 2007", the first pertinent law, focuses on electronic crimes, for example cyber-terrorism, criminal access, electronic system fraud, electronic forgery, and misuse of encryption. [167]
The National Response Centre for Cyber Crime (NR3C) – FIA is a law enforcement agency dedicated to fighting cyber crime. This high-tech crime-fighting unit was established in 2007 to identify and curb the phenomenon of technological abuse in society. [168] Certain private firms are also working in cooperation with the government to improve cyber security and curb cyber attacks. [169]
Following cyber attacks in the first half of 2013, in which government, news media, television station, and bank websites were compromised, the national government committed to training 5,000 new cybersecurity experts by 2017. The South Korean government blamed its northern counterpart for these attacks, as well as for incidents that occurred in 2009, 2011 [170] and 2012, but Pyongyang denies the accusations. [171]
The Computer Fraud and Abuse Act of 1986, codified at 18 U.S.C. § 1030, is the key legislation. It prohibits unauthorized access to or damage of "protected computers" as defined in the statute.
Various other measures have been proposed, such as the "Cybersecurity Act of 2010 – S. 773" in 2009, and the "International Cybercrime Reporting and Cooperation Act – H.R.4962" [172] and "Protecting Cyberspace as a National Asset Act of 2010 – S.3480" [173] in 2010, but none of these has been enacted.
Executive Order 13636, Improving Critical Infrastructure Cybersecurity, was signed on 12 February 2013.
The Department of Homeland Security has a dedicated division responsible for the response system, risk management program and requirements for cybersecurity in the United States called the National Cyber Security Division. [174] [175] The division is home to US-CERT operations and the National Cyber Alert System. [175] The National Cybersecurity and Communications Integration Center brings together government organizations responsible for protecting computer networks and networked infrastructure. [176]
The third priority of the Federal Bureau of Investigation (FBI) is to: "Protect the United States against cyber-based attacks and high-technology crimes", [177] and they, along with the National White Collar Crime Center (NW3C), and the Bureau of Justice Assistance (BJA) are part of the multi-agency task force, The Internet Crime Complaint Center, also known as IC3. [178]
In addition to its own specific duties, the FBI participates alongside non-profit organizations such as InfraGard. [179] [180]
The criminal division of the United States Department of Justice operates a section called the Computer Crime and Intellectual Property Section (CCIPS). The CCIPS is in charge of investigating computer crime and intellectual property crime and specializes in the search and seizure of digital evidence in computers and networks. [181]
The United States Cyber Command, also known as USCYBERCOM, is tasked with defending specified Department of Defense information networks and ensuring "the security, integrity, and governance of government and military IT infrastructure and assets". [182] It has no role in the protection of civilian networks. [183] [184]
The U.S. Federal Communications Commission's role in cybersecurity is to strengthen the protection of critical communications infrastructure, to assist in maintaining the reliability of networks during disasters, to aid in swift recovery after, and to ensure that first responders have access to effective communications services. [185]
The Food and Drug Administration has issued guidance for medical devices, [186] and the National Highway Traffic Safety Administration [187] is concerned with automotive cybersecurity. After being criticized by the Government Accountability Office, [188] and following successful attacks on airports and claimed attacks on airplanes, the Federal Aviation Administration has devoted funding to securing systems on board the planes of private manufacturers, and the Aircraft Communications Addressing and Reporting System. [189] Concerns have also been raised about the future Next Generation Air Transportation System. [190]
"Computer emergency response team" is a name given to expert groups that handle computer security incidents. In the US, two distinct organizations exist, although they work closely together.
There is growing concern that cyberspace will become the next theater of warfare. As Mark Clayton from the Christian Science Monitor described in an article titled "The New Cyber Arms Race":
In the future, wars will not just be fought by soldiers with guns or with planes that drop bombs. They will also be fought with the click of a mouse a half a world away that unleashes carefully weaponized computer programs that disrupt or destroy critical industries like utilities, transportation, communications, and energy. Such attacks could also disable military networks that control the movement of troops, the path of jet fighters, the command and control of warships. [192]
This has led to new terms such as cyberwarfare and cyberterrorism. The United States Cyber Command was created in 2009 [193] and many other countries have similar forces.
Cybersecurity is a fast-growing field of IT concerned with reducing organizations' risk of hacks or data breaches. [194] According to research from the Enterprise Strategy Group, 46% of organizations said in 2016 that they have a "problematic shortage" of cybersecurity skills, up from 28% in 2015. [195] Commercial, government and non-governmental organizations all employ cybersecurity professionals. The fastest increases in demand for cybersecurity workers are in industries managing growing volumes of consumer data, such as finance, health care, and retail. [196] However, the use of the term "cybersecurity" is more prevalent in government job descriptions. [197]
Typical cyber security job titles and descriptions include: [198]
Student programs are also available to people interested in beginning a career in cybersecurity. [199] [200] Meanwhile, a flexible and effective option for information security professionals of all experience levels to keep studying is online security training, including webcasts. [201] [202] [203] A wide range of certified courses are also available. [204]
In the United Kingdom, a nationwide set of cybersecurity forums, known as the UK Cyber Security Forum, was established, supported by the Government's cyber security strategy, [205] in order to encourage start-ups and innovation and to address the skills gap [206] identified by the UK Government.
The following terms used in engineering secure systems are explained below.
A cyberattack is any type of offensive maneuver that targets computer information systems, infrastructures, computer networks, or personal computing devices. Cyberattacks may be employed by nation-states, individuals, groups, societies or organizations, and may originate from an anonymous source. A cyberattack may steal, alter, or destroy a specified target by hacking into a susceptible system. [210]
In computers and computer networks, an attack is any attempt to expose, alter, disable, destroy, steal or gain unauthorized access to or make unauthorized use of an asset. [211]
Cyber attacks can be labelled as a cyber campaign, cyberwarfare or cyberterrorism depending on context. Cyberattacks can range from installing spyware on a personal computer to attempts to destroy the infrastructure of entire nations. Cyberattacks have become increasingly sophisticated and dangerous, as the Stuxnet worm demonstrated. [212]
User behavior analytics (UBA) and security information and event management (SIEM) systems can be used to help detect and prevent these attacks.
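As an illustration of the user-behavior-analytics idea, a minimal sketch might flag accounts whose failed-login counts exceed a baseline. The threshold, event format and function name below are invented for this example, not taken from any real UBA or SIEM product:

```python
from collections import Counter

def flag_anomalous_logins(events, baseline_per_user=3):
    """Flag users whose failed-login count exceeds a simple baseline.

    `events` is a list of (user, outcome) tuples. The baseline of 3
    failures is an arbitrary illustrative threshold; real products
    learn per-user baselines from historical behavior.
    """
    failures = Counter(user for user, outcome in events if outcome == "failure")
    return {user for user, count in failures.items() if count > baseline_per_user}

events = [
    ("alice", "success"), ("bob", "failure"), ("bob", "failure"),
    ("bob", "failure"), ("bob", "failure"), ("carol", "success"),
]
print(flag_anomalous_logins(events))  # {'bob'}
```

In practice a SIEM would correlate such signals across many log sources (authentication, network, endpoint) rather than a single event stream.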
Legal experts are seeking to limit use of the term to incidents causing physical damage, distinguishing it from the more routine data breaches and broader hacking activities. [213]
The Internet Engineering Task Force defines an attack in RFC 2828 as: [88]
CNSS Instruction No. 4009, dated 26 April 2010, by the Committee on National Security Systems of the United States of America [89] defines an attack as:
The increasing dependence of modern society on information and computer networks (in both the private and public sectors, including the military) [214] [215] [216] has led to new terms like cyber attack and cyberwarfare.
CNSS Instruction No. 4009 [89] defines a cyber attack as:
Cyberwarfare utilizes techniques of defending and attacking information and computer networks that inhabit cyberspace, often through a prolonged cyber campaign or series of related campaigns. It denies an opponent's ability to do the same, while employing technological instruments of war to attack an opponent's critical computer systems. Cyberterrorism, on the other hand, is "the use of computer network tools to shut down critical national infrastructures (such as energy, transportation, government operations) or to coerce or intimidate a government or civilian population". [217] That means the end result of both cyberwarfare and cyberterrorism is the same, to damage critical infrastructures and computer systems linked together within the confines of cyberspace.
Three factors contribute to why cyber-attacks are launched against a state or an individual: the fear factor, spectacularity factor, and vulnerability factor.
The spectacularity factor is a measure of the actual damage achieved by an attack, meaning that the attack creates direct losses (usually loss of availability or loss of income) and garners negative publicity. On February 8, 2000, a Denial of Service attack severely reduced traffic to many major sites, including Amazon, Buy.com, CNN, and eBay (the attack continued to affect still other sites the next day). [218] Amazon reportedly estimated the loss of business at $600,000. [218]
The vulnerability factor exploits how vulnerable an organization or government establishment is to cyber-attacks. An organization can be vulnerable to a denial of service attack, and a government establishment's web pages can be defaced. A computer network attack disrupts the integrity or authenticity of data, usually through malicious code that alters the program logic controlling data, leading to errors in output. [219]
Professional hackers, either working on their own or employed by government or military services, can find computer systems with vulnerabilities lacking the appropriate security software. Once found, they can infect systems with malicious code and then remotely control the system or computer by sending commands to view content or to disrupt other computers. There needs to be a pre-existing system flaw within the computer, such as a lack of antivirus protection or a faulty system configuration, for the viral code to work. Many professional hackers will promote themselves to cyberterrorism, where a new set of rules governs their actions. Cyberterrorists have premeditated plans, and their attacks are not born of rage. They develop their plans step by step and acquire the appropriate software to carry out an attack. They usually have political agendas, targeting political structures. Cyberterrorists are hackers with a political motivation; their attacks can impact political structures through corruption and destruction. [220] They also target civilians, civilian interests and civilian installations. As previously stated, cyberterrorists attack persons or property and cause enough harm to generate fear.
An attack can be active or passive. [88]
An attack can be perpetrated by an insider or from outside the organization; [88]
The term "attack" relates to some other basic security terms as shown in the following diagram: [88]
+ - - - - - - - - - - - - +  + - - - - +  + - - - - - - - - - - -+
| An Attack:              |  |Counter- |  | A System Resource:   |
| i.e., A Threat Action   |  | measure |  | Target of the Attack |
| +----------+            |  |         |  | +-----------------+  |
| | Attacker |<==================||<=========                 |  |
| |   i.e.,  |   Passive  |  |         |  | |  Vulnerability  |  |
| | A Threat |<=================>||<========>                 |  |
| |  Agent   |  or Active |  |         |  | +-------|||-------+  |
| +----------+   Attack   |  |         |  |         VVV          |
|                         |  |         |  | Threat Consequences  |
+ - - - - - - - - - - - - +  + - - - - +  + - - - - - - - - - - -+
A resource (either physical or logical), called an asset, can have one or more vulnerabilities that can be exploited by a threat agent in a threat action. As a result, the confidentiality, integrity or availability of resources may be compromised. Potentially, the damage may extend to resources beyond the one initially identified as vulnerable, including further resources of the organization and the resources of other involved parties (customers, suppliers).
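The vocabulary above (asset, vulnerability, threat action, consequence) can be sketched as a toy data model. The class names and the example vulnerability below are hypothetical, chosen only to illustrate the relationships between the terms:

```python
from dataclasses import dataclass, field

@dataclass
class Vulnerability:
    name: str
    affects: str  # the security property at risk: "confidentiality",
                  # "integrity" or "availability" (the CIA triad)

@dataclass
class Asset:
    name: str
    vulnerabilities: list = field(default_factory=list)

def exploit(asset, vuln_name):
    """Model a threat action: return the security property compromised,
    or None if the asset has no such vulnerability."""
    for v in asset.vulnerabilities:
        if v.name == vuln_name:
            return v.affects
    return None

# A hypothetical asset with one hypothetical vulnerability.
server = Asset("web-server", [Vulnerability("sql-injection", "integrity")])
print(exploit(server, "sql-injection"))  # integrity
```

The point of the model is simply that a threat action succeeds only where a matching vulnerability exists, and its consequence is expressed against one of the CIA properties.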
The so-called CIA triad is the basis of information security.
An attack can be "active" when it attempts to alter system resources or affect their operation, compromising integrity or availability. A "passive" attack attempts to learn or make use of information from the system but does not affect system resources, compromising confidentiality.
A threat is a potential for violation of security, which exists when there is a circumstance, capability, action, or event that could breach security and cause harm. That is, a threat is a possible danger that might exploit a vulnerability. A threat can be either "intentional" (i.e., intelligent; e.g., an individual cracker or a criminal organization) or "accidental" (e.g., the possibility of a computer malfunctioning, or the possibility of an "act of God" such as an earthquake, a fire, or a tornado). [88]
A set of policies concerned with information security management, the information security management system (ISMS), has been developed to manage, according to risk management principles, the countermeasures needed to carry out a security strategy established in accordance with the rules and regulations applicable in a country. [221]
An attack can lead to a security incident, i.e. a security event that involves a security violation; in other words, a security-relevant system event in which the system's security policy is disobeyed or otherwise breached.
The overall picture represents the risk factors of the risk scenario. [222]
An organization should take steps to detect, classify and manage security incidents. The first logical step is to set up an incident response plan and, eventually, a computer emergency response team.
In order to detect attacks, a number of countermeasures can be set up at organizational, procedural and technical levels. Computer emergency response teams, information technology security audits and intrusion detection systems are examples of these. [223]
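As a sketch of the intrusion-detection idea, signature-based detectors match traffic or log entries against known attack patterns. The two signatures below are simplified illustrations invented for this example, far cruder than the rule sets of real systems such as Snort:

```python
import re

# Toy signature set: each entry maps an alert name to a pattern that
# is characteristic of a known attack technique. These patterns are
# illustrative only and would produce false positives/negatives in
# any real deployment.
SIGNATURES = {
    "sql-injection": re.compile(r"(?i)union\s+select|' or '1'='1"),
    "path-traversal": re.compile(r"\.\./\.\./"),
}

def scan(log_lines):
    """Return (alert_name, offending_line) pairs for every signature hit."""
    alerts = []
    for line in log_lines:
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                alerts.append((name, line))
    return alerts

logs = ["GET /index.html", "GET /?q=1' OR '1'='1"]
print(scan(logs))
```

Signature matching catches only known patterns; this is why it is typically paired with the anomaly-based approaches used by behavioral analytics tools.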
An attack is usually perpetrated by someone with bad intentions: black-hat attacks fall into this category, while others perform penetration testing on an organization's information system to find out whether all foreseen controls are in place.
Attacks can be classified according to their origin, i.e. whether they are conducted using one or more computers; in the latter case they are called distributed attacks. Botnets are used to conduct distributed attacks.
Other classifications are based on the procedures used or the type of vulnerabilities exploited: attacks can be concentrated on network mechanisms or host features.
Some attacks are physical, i.e. theft of or damage to computers and other equipment. Others are attempts to force changes in the logic used by computers or network protocols in order to achieve results unforeseen by the original designer but useful to the attacker. Software used for logical attacks on computers is called malware.
The following is a partial short list of attacks:
In detail, there are a number of techniques to utilize in cyber-attacks and a variety of ways to administer them to individuals or establishments on a broader scale. Attacks are broken down into two categories: syntactic attacks and semantic attacks. Syntactic attacks are straightforward: they use malicious software, which includes viruses, worms, and Trojan horses.
A virus is a self-replicating program that can attach itself to another program or file in order to reproduce. The virus can hide in unlikely locations in the memory of a computer system and attach itself to whatever file it sees fit to execute its code. It can also change its digital footprint each time it replicates, making it harder to track down.
A worm does not need another file or program to copy itself; it is a self-sustaining running program. Worms replicate over a network using network protocols. Modern worms make use of known vulnerabilities in systems to penetrate, execute their code, and replicate to other systems, such as the Code Red II worm that infected more than 259,000 systems in less than 14 hours. [225] On a much larger scale, worms can be designed for industrial espionage to monitor and collect server and traffic activities and then transmit the data back to their creator.
A Trojan horse is designed to perform legitimate tasks but also performs unknown and unwanted activity. It can serve as the basis for many viruses and worms, installing onto the computer as a keyboard logger or backdoor software. In a commercial sense, Trojans can be embedded in trial versions of software and can gather additional intelligence about the target without the person even knowing it is happening. All three are likely to attack an individual or establishment through emails, web browsers, chat clients, remote software, and updates.
A semantic attack is the modification and dissemination of correct and incorrect information. Information could be modified without the use of computers, even though new opportunities can be found by using them. The dissemination of incorrect information can be used to steer someone in the wrong direction or to cover one's tracks.
In the conflict between Israel and the Palestinian Authority, cyber attacks were conducted in October 2000, when Israeli hackers launched DoS attacks on computers owned by Palestinian resistance organizations (Hamas) and Lebanese resistance organizations (Hezbollah). Anti-Israel hackers responded by crashing several Israeli websites, flooding them with bogus traffic. [220]
Cyberspace conflicts between India and Pakistan began in the 1990s; the earliest known cyber attacks date to 1999. [220] Since then, India and Pakistan have been engaged in a long-term dispute over Kashmir that has moved into cyberspace. Historical accounts indicate that each country's hackers have repeatedly attacked the other's computing database systems. The number of attacks has grown yearly: 45 in 1999, 133 in 2000, and 275 by the end of August 2001. [220] In 2010, Indian hackers calling themselves the "Indian Cyber Army" attacked at least 36 government database websites. [226] In 2013, Indian hackers compromised the official website of the Election Commission of Pakistan in an attempt to retrieve sensitive database information. [227] In retaliation, Pakistani hackers calling themselves the "True Cyber Army" hacked and defaced approximately 1,059 websites of Indian election bodies. [227]
According to the media, Pakistan has been working on an effective cyber security system in a program called "Cyber Secure Pakistan" (CSP). [228] The program was launched in April 2013 by the Pakistan Information Security Association and has expanded to the country's universities.
Within cyberwarfare, one must recognize the state actors involved in committing these cyber-attacks against one another. The two predominant players discussed here reflect the age-old comparison of East versus West: China's cyber capabilities compared with those of the United States. There are many other state and non-state actors involved in cyberwarfare, such as Russia, Iran, Iraq, and Al Qaeda; but since China and the U.S. lead the field in cyberwarfare capabilities, they are the only two state actors discussed here.
However, in Q2 2013, Akamai Technologies reported that Indonesia overtook China as the source of 38 percent of cyber attacks, a sharp increase from its 21 percent share in the previous quarter. China accounted for 33 percent and the US for 6.9 percent; 79 percent of attacks came from the Asia Pacific region. Indonesia dominated attacks on ports 80 and 443, accounting for about 90 percent. [229]
China's People's Liberation Army (PLA) has developed a strategy called "Integrated Network Electronic Warfare" which guides computer network operations and the use of cyberwarfare tools. This strategy links network warfare tools and electronic warfare weapons against an opponent's information systems during conflict. The PLA believes the fundamental element for achieving success is seizing control of an opponent's information flow and establishing information dominance. [230] The Science of Military Strategy and The Science of Campaigns both identify enemy logistics systems networks as the highest priority for cyber-attacks, and state that cyberwarfare must mark the start of a campaign which, used properly, can enable overall operational success. [230] By focusing on attacking the opponent's infrastructure to disrupt the transmission and processing of information that dictates decision-making operations, the PLA would secure cyber dominance over its adversary. The predominant techniques that would be utilized during a conflict to gain the upper hand are as follows: the PLA would strike with electronic jammers and electronic deception and suppression techniques to interrupt the transfer of information, and would launch virus attacks or employ hacking techniques to sabotage information processes, all in the hope of destroying enemy information platforms and facilities. The PLA's Science of Campaigns notes that one role for cyberwarfare is to create windows of opportunity for other forces to operate without detection or with a lowered risk of counterattack by exploiting the enemy's periods of "blindness", "deafness" or "paralysis" created by cyber-attacks. [230] That is one of the main focal points of cyberwarfare: to weaken the enemy to the fullest extent possible so that a physical offensive has a higher chance of success.
The PLA conducts regular training exercises in a variety of environments, emphasizing the use of cyberwarfare tactics and techniques and countermeasures against such tactics should they be employed against it. Faculty research has focused on designs for rootkit usage and detection for the Kylin Operating System, which helps to further train these individuals' cyberwarfare techniques. China perceives cyberwarfare as a deterrent to nuclear weapons, possessing the ability for greater precision, leaving fewer casualties, and allowing for long-range attacks.
In the West, the United States strikes a different tone when cyberwarfare is discussed. The United States provides security plans strictly in response to cyberwarfare, essentially going on the defensive when attacked by devious cyber methods. In the U.S., the responsibility for cybersecurity is divided between the Department of Homeland Security, the Federal Bureau of Investigation, and the Department of Defense. In recent years, a new organization was created specifically to address cyber threats: Cyber Command. Cyber Command is a military subcommand under US Strategic Command and is responsible for dealing with threats to the military cyber infrastructure. Cyber Command's service elements include Army Forces Cyber Command, the Twenty-fourth Air Force, Fleet Cyber Command and Marine Forces Cyber Command. [231] It ensures that the President can navigate and control information systems and that military options are available when defense of the nation needs to be enacted in cyberspace. Individuals at Cyber Command must pay attention to state and non-state actors who are developing cyberwarfare capabilities for conducting cyber espionage and other cyber-attacks against the nation and its allies. Cyber Command seeks to be a deterrent, dissuading potential adversaries from attacking the U.S., while being a multi-faceted organization in conducting cyber operations of its own.
Three prominent events took place which may have been catalysts in the creation of the idea of Cyber Command. The first was a failure of critical infrastructure reported by the CIA, in which malicious activity against information technology systems disrupted electrical power capabilities overseas, resulting in multi-city power outages across multiple regions. The second event was the exploitation of global financial services. In November 2008, an international bank had a compromised payment processor that allowed fraudulent transactions to be made at more than 130 automated teller machines in 49 cities within a 30-minute period. [232] The last event was the systemic loss of U.S. economic value, with one industry estimate in 2008 putting losses of intellectual property to data theft at $1 trillion. Even though these events were internal catastrophes, they were very real in nature, meaning nothing can stop state or non-state actors from doing the same on an even grander scale. Other initiatives like the Cyber Training Advisory Council were created to improve the quality, efficiency, and sufficiency of training for computer network defense, attack, and exploitation of enemy cyber operations.
On both ends of the spectrum, East and West nations show a "sword and shield" contrast in ideals. The Chinese take a more offense-minded view of cyberwarfare, trying to land a pre-emptive strike in the early stages of conflict to gain the upper hand. The U.S. takes more reactive measures, aimed at creating systems with impenetrable barriers to protect the nation and its civilians from cyber-attacks.
According to Homeland Preparedness News, many mid-sized U.S. companies have a difficult time defending their systems against cyber attacks. Around 80 percent of assets vulnerable to a cyber attack are owned by private companies and organizations. Former New York State Deputy Secretary for Public Safety Michael Balboni said that private entities "do not have the type of capability, bandwidth, interest or experience to develop a proactive cyber analysis." [233]
On April 1, 2015, in response to cyber-attacks, President Obama issued an Executive Order establishing the first-ever economic sanctions for cyber-attacks. The Executive Order affects individuals and entities ("designees") responsible for cyber-attacks that threaten the national security, foreign policy, economic health, or financial stability of the US. Specifically, it authorizes the Treasury Department to freeze designees' assets. [234]
A series of powerful cyber attacks began 27 June 2017 that swamped websites of Ukrainian organizations, including banks, ministries, newspapers and electricity firms.
A whole industry works to minimize the likelihood and the consequences of an information attack.
For a partial list see: Computer security software companies.
They offer a variety of products and services aimed at reducing these risks.
Many organizations try to classify vulnerabilities and their consequences; the best-known vulnerability database is the Common Vulnerabilities and Exposures (CVE).
Computer emergency response teams are set up by governments and large organizations to handle computer security incidents.
Once a cyber-attack has been initiated, certain targets must be attacked to cripple the opponent. Certain infrastructures have been highlighted as critical in time of conflict, whose loss can severely cripple a nation: control systems, energy resources, finance, telecommunications, transportation, and water facilities are seen as critical infrastructure targets during conflict. A report on industrial cybersecurity problems, produced by the British Columbia Institute of Technology and the PA Consulting Group using data from as far back as 1981, reportedly found a 10-fold increase in the number of successful cyber-attacks on infrastructure Supervisory Control and Data Acquisition (SCADA) systems since 2000. [219] Cyberattacks that have an adverse physical effect are known as cyber-physical attacks. [235]
Control systems are responsible for activating and monitoring industrial or mechanical controls. Many devices are integrated with computer platforms to control valves and gates to certain physical infrastructures. Control systems are usually designed as remote telemetry devices that link to other physical devices through internet access or modems. Little security can be offered when dealing with these devices, enabling many hackers or cyberterrorists to seek out systematic vulnerabilities. Paul Blomgren, manager of sales engineering at a cybersecurity firm, explained how his people drove to a remote substation, saw a wireless network antenna and immediately plugged in their wireless LAN cards. They took out their laptops and connected to the system because it wasn't using passwords. "Within 10 minutes, they had mapped every piece of equipment in the facility," Blomgren said. "Within 15 minutes, they mapped every piece of equipment in the operational control network. Within 20 minutes, they were talking to the business network and had pulled off several business reports. They never even left the vehicle." [236]
Energy is seen as the second infrastructure that could be attacked. It is broken down into two categories, electricity and natural gas. Electric grids power cities, regions, and households; they power machines and other mechanisms used in day-to-day life. Using the U.S. as an example, in a conflict cyberterrorists can access data through the Daily Report of System Status, which shows power flows throughout the system, and can pinpoint the busiest sections of the grid. By shutting those grids down, they can cause mass hysteria, backlog, and confusion, while also locating critical areas of operation for further, more direct attacks. Cyberterrorists can also access instructions on how to connect to the Bonneville Power Administration, which helps direct them in how not to fault the system in the process. This is a major advantage for cyber-attacks, because foreign attackers with no prior knowledge of the system can attack with high accuracy without drawbacks. Cyberattacks on natural gas installations proceed much as attacks on electrical grids do. Cyberterrorists can shut down these installations, stopping the flow of gas, or even reroute gas flows to a section occupied by one of their allies. There was a case in Russia in which a gas supplier, Gazprom, lost control of its central switchboard, which routes gas flow, after an inside operator and a Trojan horse program bypassed security. [236]
Financial infrastructure could be hit hard by cyber-attacks, as the financial system is linked by computer systems. [237] Money is constantly being exchanged in these institutions, and if cyberterrorists were to attack, rerouting transactions and stealing large amounts of money, financial industries would collapse and civilians would be without jobs and security. Operations would stall from region to region, causing nationwide economic degradation. In the U.S. alone, the average daily volume of transactions hit $3 trillion, 99% of which is non-cash. [236] Disrupting that amount of money for a day, or for a period of days, could cause lasting damage, making investors pull out of funding and eroding public confidence.
A cyberattack on a financial institution or its transactions may be referred to as a cyberheist. These attacks may start with phishing that targets employees, using social engineering to coax information from them, which may allow attackers to hack into the network and put keyloggers on the accounting systems. In time, the cybercriminals are able to obtain passwords and key information. An organization's bank accounts can then be accessed via the information stolen using the keyloggers. [238] In May 2013, a gang carried out a US$40 million cyberheist from the Bank of Muscat. [239]
Cyberattacks on telecommunication infrastructure have straightforward results. Telecommunication integration is becoming common practice; systems such as voice and IP networks are merging. Everything is run over the internet because the speeds and storage capabilities are vast. Denial-of-service attacks can be administered as previously mentioned, but more complex attacks can be made on BGP routing protocols or DNS infrastructure. It is less likely that an attack would target or compromise the traditional telephony network of SS7 switches, or that physical devices such as microwave stations or satellite facilities would be attacked, although the ability to shut down those physical facilities to disrupt telephony networks would still exist. The whole idea behind these cyber-attacks is to cut people off from one another, to disrupt communication, and by doing so, to impede critical information being sent and received. In cyberwarfare, this is a critical way of gaining the upper hand in a conflict. By controlling the flow of information and communication, a nation can plan more accurate strikes and enact better counter-attack measures on its enemies.
Transportation infrastructure mirrors telecommunication facilities; by impeding transportation for individuals in a city or region, the economy slowly degrades over time. Successful cyber-attacks can impact scheduling and accessibility, creating a disruption in the economic chain. Freight transport would be impacted, making it hard for cargo to be sent from one place to another. In January 2003, during the "slammer" virus, Continental Airlines was forced to shut down flights due to computer problems. [236] Cyberterrorists can target railroads by disrupting switches, target flight software to impede airplanes, and target road usage to impede more conventional transportation methods. In May 2015, cyberconsultant Chris Roberts revealed to the FBI that, from 2011 to 2014, he had allegedly managed to hack into Boeing and Airbus flight controls repeatedly via the onboard entertainment system, and had at least once ordered a flight to climb. The FBI, after detaining him in April 2015 in Syracuse, interviewed him about the allegations. [240]
Water as an infrastructure could be one of the most critical infrastructures to be attacked. It is seen as one of the greatest security hazards among all computer-controlled systems. There is the potential for massive amounts of water to be unleashed into an unprotected area, causing loss of life and property damage. Nor is it only water supplies that could be attacked; sewer systems can be compromised too. No calculation was given for the cost of damages, but the estimated cost to replace critical water systems could be in the hundreds of billions of dollars. [236] Most of these water infrastructures are well developed, making it hard for cyber-attacks to cause any significant damage; at most, equipment failure can occur, disrupting service for a short time.
Malware (short for malicious software) is any software intentionally designed to cause damage to a computer, server or computer network. [241] Malware does its damage after it is implanted or introduced in some way into a target's computer, and can take the form of executable code, scripts, active content, and other software. [242] Such code is described as computer viruses, worms, Trojan horses, ransomware, spyware, adware, scareware, among other terms. Malware has malicious intent, acting against the interest of the computer user, and so does not include software that causes unintentional harm due to some deficiency, usually described as a software bug.
Programs officially supplied by companies can be considered malware if they secretly act against the interests of the computer user. For example, Sony sold the Sony rootkit, which had a Trojan horse embedded into CDs, and which silently installed and concealed itself on purchasers' computers with the intention of preventing illicit copying; it also reported on users' listening habits, and unintentionally created vulnerabilities that were exploited by unrelated malware. [243]
Protection against malware involves the prevention of malware software gaining access to the target's computer, and for this purpose antivirus software, firewalls and other strategies can be used to try to protect against the introduction of malware, to check for the presence of malware and malicious activity, and to recover from attacks. [244]
Many early infectious programs, including the first Internet Worm, were written as experiments or pranks. Today, malware is used by both black hat hackers and governments, to steal personal, financial, or business information. [245] [246]
Malware is sometimes used broadly against government or corporate websites to gather guarded information, [247] or to disrupt their operation in general. However, malware can be used against individuals to gain information such as personal identification numbers or details, bank or credit card numbers, and passwords.
Since the rise of widespread broadband Internet access, malicious software has more frequently been designed for profit. Since 2003, the majority of widespread viruses and worms have been designed to take control of users' computers for illicit purposes. [248] Infected "zombie computers" can be used to send email spam, to host contraband data such as child pornography, [249] or to engage in distributed denial-of-service attacks as a form of extortion. [250]
Programs designed to monitor users' web browsing, display unsolicited advertisements, or redirect affiliate marketing revenues are called spyware. Spyware programs do not spread like viruses; instead they are generally installed by exploiting security holes. They can also be hidden and packaged together with unrelated user-installed software. [251]
Ransomware affects an infected computer system in some way, and demands payment to bring it back to its normal state. For example, programs such as CryptoLocker encrypt files securely, and only decrypt them on payment of a substantial sum of money. [252]
Some malware is used to generate money by click fraud, making it appear that the computer user has clicked an advertising link on a site, generating a payment from the advertiser. It was estimated in 2012 that about 60 to 70% of all active malware used some kind of click fraud, and 22% of all ad-clicks were fraudulent. [253]
In addition to criminal money-making, malware can be used for sabotage, often for political motives. Stuxnet, for example, was designed to disrupt very specific industrial equipment. There have been politically motivated attacks that have spread over and shut down large computer networks, including massive deletion of files and corruption of master boot records, described as "computer killing". Such attacks were made on Sony Pictures Entertainment (25 November 2014, using malware known as Shamoon or W32.Disttrack) and Saudi Aramco (August 2012). [254] [255]
The best-known types of malware, viruses and worms, are known for the manner in which they spread, rather than for any specific type of behavior. A computer virus is software that embeds itself in some other executable software (including the operating system itself) on the target system without the user's consent; when it is run, the virus spreads to other executables. A worm, on the other hand, is stand-alone malware that actively transmits itself over a network to infect other computers. These definitions lead to the observation that a virus requires the user to run infected software or an infected operating system in order to spread, whereas a worm spreads itself. [256]
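The difference in propagation can be sketched with a toy simulation (all file and host names below are hypothetical, and the functions model only the spreading logic, not any real malware):

```python
# Toy model of the virus/worm distinction. A virus spreads only when the
# user runs an infected executable; a worm propagates over the network
# on its own.

def virus_step(all_files, infected, user_runs):
    """Return the infected set after the user runs some files."""
    newly_infected = set()
    for f in user_runs:
        if f in infected:                            # running an infected file...
            newly_infected |= set(all_files) - infected  # ...taints the rest
    return infected | newly_infected

def worm_step(network, infected):
    """Return the infected set after one round of autonomous spreading."""
    newly_infected = set()
    for host in infected:
        newly_infected |= set(network.get(host, []))  # every reachable neighbor
    return infected | newly_infected

# The virus goes nowhere until a user executes the infected file:
files = ["game.exe", "editor.exe", "notes.exe"]
assert virus_step(files, {"game.exe"}, ["editor.exe"]) == {"game.exe"}
assert virus_step(files, {"game.exe"}, ["game.exe"]) == set(files)

# The worm needs no user action, only network reachability:
network = {"host-a": ["host-b"], "host-b": ["host-c"]}
wave1 = worm_step(network, {"host-a"})
assert worm_step(network, wave1) == {"host-a", "host-b", "host-c"}
```

The asserts at the end illustrate the observation above: the virus set grows only through the `user_runs` argument, while the worm set grows every step on its own.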
These categories are not mutually exclusive, so malware may use multiple techniques. [257] This section only applies to malware designed to operate undetected, not sabotage and ransomware.
A computer virus is software usually hidden within another seemingly innocuous program that can produce copies of itself and insert them into other programs or files, and that usually performs a harmful action (such as destroying data). [258] An example of this is a PE infection, a technique, usually used to spread malware, that inserts extra data or executable code into PE files. [259]
Lock-screens, or screen lockers, are a type of "cyber police" ransomware that blocks the screens of Windows or Android devices with a false accusation of harvesting illegal content, trying to scare victims into paying a fee. [260] Jisut and SLocker impact Android devices more than other lock-screens, with Jisut making up nearly 60 percent of all Android ransomware detections. [261]
A Trojan horse is a harmful program that misrepresents itself to masquerade as a regular, benign program or utility in order to persuade a victim to install it. A Trojan horse usually carries a hidden destructive function that is activated when the application is started. The term is derived from the Ancient Greek story of the Trojan horse used to invade the city of Troy by stealth. [262] [263] [264] [265] [266]
Trojan horses are generally spread by some form of social engineering, for example where a user is duped into executing an e-mail attachment disguised to look innocuous (e.g., a routine form to be filled in), or by drive-by download. Although their payload can be anything, many modern forms act as a backdoor, contacting a controller which can then gain unauthorized access to the affected computer. [267] While Trojan horses and backdoors are not easily detectable by themselves, computers may appear to run slower due to heavy processor or network usage.
Unlike computer viruses and worms, Trojan horses generally do not attempt to inject themselves into other files or otherwise propagate themselves. [268]
In spring 2017, Mac users were hit by a new version of the Proton Remote Access Trojan (RAT) [269] designed to extract password data from various sources, such as browser auto-fill data, the macOS keychain, and password vaults. [270]
Once malicious software is installed on a system, it is essential that it stays concealed, to avoid detection. Software packages known as rootkits allow this concealment, by modifying the host's operating system so that the malware is hidden from the user. Rootkits can prevent a harmful process from being visible in the system's list of processes, or keep its files from being read. [271]
Some types of harmful software contain routines to evade identification and/or removal attempts, not merely to hide themselves. An early example of this behavior is recorded in the Jargon File tale of a pair of programs infesting a Xerox CP-V time sharing system:
A backdoor is a method of bypassing normal authentication procedures, usually over a connection to a network such as the Internet. Once a system has been compromised, one or more backdoors may be installed in order to allow access in the future, [273] invisibly to the user.
The idea has often been suggested that computer manufacturers preinstall backdoors on their systems to provide technical support for customers, but this has never been reliably verified. It was reported in 2014 that US government agencies had been diverting computers purchased by those considered "targets" to secret workshops where software or hardware permitting remote access by the agency was installed, considered to be among the most productive operations to obtain access to networks around the world. [274] Backdoors may be installed by Trojan horses, worms, implants, or other methods. [275] [276]
Since the beginning of 2015, a sizable portion of malware utilizes a combination of many techniques designed to avoid detection and analysis. [277]
Nowadays, one of the most sophisticated and stealthy ways of evasion is to use information hiding techniques, namely stegomalware.
Malware exploits security defects (security bugs or vulnerabilities) in the design of the operating system, in applications (such as browsers, e.g. older versions of Microsoft Internet Explorer supported by Windows XP [282]), or in vulnerable versions of browser plugins such as Adobe Flash Player, Adobe Acrobat or Reader, or Java SE. [283] [284] Sometimes even installing new versions of such plugins does not automatically uninstall old versions. Security advisories from plug-in providers announce security-related updates. [285] Common vulnerabilities are assigned CVE IDs and listed in the US National Vulnerability Database. Secunia PSI [286] is an example of software, free for personal use, that will check a PC for vulnerable out-of-date software and attempt to update it.
Malware authors target bugs, or loopholes, to exploit. A common method is exploitation of a buffer overrun vulnerability, where software designed to store data in a specified region of memory does not prevent more data than the buffer can accommodate being supplied. Malware may provide data that overflows the buffer, with malicious executable code or data after the end; when this payload is accessed it does what the attacker, not the legitimate software, determines.
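The mechanism can be sketched harmlessly by modeling memory as a flat byte array; the layout below (an 8-byte buffer directly followed by a 4-byte return-address slot) is a simplified assumption for illustration, not any real system's stack:

```python
# Model a flat memory region: an 8-byte buffer followed directly by a
# "return address" slot, as on a simplified stack with no bounds checking.
memory = bytearray(12)
BUF, RET = 0, 8                                        # buffer 0..7, return addr 8..11
memory[RET:RET + 4] = (0x1000).to_bytes(4, "little")   # legitimate return target

def unsafe_copy(data):
    """Copy input into the buffer without checking its length."""
    memory[BUF:BUF + len(data)] = data                 # no bounds check: the overrun

# An attacker supplies 8 padding bytes plus a 4-byte address of their choosing.
payload = b"A" * 8 + (0x4141).to_bytes(4, "little")
unsafe_copy(payload)

hijacked = int.from_bytes(memory[RET:RET + 4], "little")
# hijacked is now 0x4141: control flow would jump where the attacker chose.
```

Because `unsafe_copy` trusts the caller's length, the four bytes past the buffer's end silently overwrite the adjacent slot, which is exactly how a real overrun lets the payload, not the legitimate software, determine what runs next.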
Early PCs had to be booted from floppy disks. When built-in hard drives became common, the operating system was normally started from them, but it was possible to boot from another boot device if available, such as a floppy disk, CD-ROM, DVD-ROM, USB flash drive or network. It was common to configure the computer to boot from one of these devices when available. Normally none would be available; the user would intentionally insert, say, a CD into the optical drive to boot the computer in some special way, for example, to install an operating system. Even without booting, computers can be configured to execute software on some media as soon as they become available, e.g. to autorun a CD or USB device when inserted.
Malware distributors would trick the user into booting or running from an infected device or medium. For example, a virus could make an infected computer add autorunnable code to any USB stick plugged into it. Anyone who then attached the stick to another computer set to autorun from USB would in turn become infected, and also pass on the infection in the same way. [287] More generally, any device that plugs into a USB port - even lights, fans, speakers, toys, or peripherals such as a digital microscope - can be used to spread malware. Devices can be infected during manufacturing or supply if quality control is inadequate. [287]
This form of infection can largely be avoided by setting up computers by default to boot from the internal hard drive, if available, and not to autorun from devices. [287] Intentional booting from another device is always possible by pressing certain keys during boot.
Older email software would automatically open HTML email containing potentially malicious JavaScript code. Users may also execute disguised malicious email attachments and infected executable files supplied in other ways.
In computing, privilege refers to how much a user or program is allowed to modify a system. In poorly designed computer systems, both users and programs can be assigned more privileges than they should have, and malware can take advantage of this. The two ways malware does so are through overprivileged users and overprivileged code.
Some systems allow all users to modify their internal structures, and such users today would be considered over-privileged users. This was the standard operating procedure for early microcomputer and home computer systems, where there was no distinction between an administrator or root, and a regular user of the system. In some systems, non-administrator users are over-privileged by design, in the sense that they are allowed to modify internal structures of the system. In some environments, users are over-privileged because they have been inappropriately granted administrator or equivalent status.
Some systems allow code executed by a user to access all rights of that user, which is known as over-privileged code. This was also standard operating procedure for early microcomputer and home computer systems. Malware, running as over-privileged code, can use this privilege to subvert the system. Almost all currently popular operating systems, and also many scripting applications, grant code too many privileges, usually in the sense that when a user executes code, the system grants that code all rights of that user. This makes users vulnerable to malware in the form of e-mail attachments, which may or may not be disguised.
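By contrast, a least-privilege design grants code only the rights it explicitly declares. A minimal sketch, with hypothetical capability names such as `fs.read` (not any real operating system's permission model):

```python
# Sketch of least privilege: code receives an explicit capability set
# rather than inheriting every right of the user who ran it.

class PrivilegeError(PermissionError):
    pass

def run_with_capabilities(task, granted):
    """Run a task only if its declared capabilities are all granted."""
    needed = set(getattr(task, "requires", ()))
    excess = needed - set(granted)
    if excess:                       # the task asks for more than it was given
        raise PrivilegeError(f"denied: {sorted(excess)}")
    return task()

def read_report():
    return "quarterly figures"
read_report.requires = ("fs.read",)

def wipe_disk():
    return "disk erased"
wipe_disk.requires = ("fs.read", "fs.write.raw")
```

Under this scheme, `run_with_capabilities(read_report, {"fs.read"})` succeeds, while running `wipe_disk` with only `fs.read` raises `PrivilegeError`; over-privileged systems, in effect, pass every task the full capability set of the user.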
As malware attacks become more frequent, attention has begun to shift from viruses and spyware protection, to malware protection, and programs that have been specifically developed to combat malware. (Other preventive and recovery measures, such as backup and recovery methods, are mentioned in the computer virus article).
A specific component of anti-virus and anti-malware software, commonly referred to as an on-access or real-time scanner, hooks deep into the operating system's kernel and functions in a manner similar to how certain malware itself would attempt to operate, though with the user's informed permission for protecting the system. Any time the operating system accesses a file, the on-access scanner checks whether the file is 'legitimate' or not. If the file is identified as malware by the scanner, the access operation is stopped, the file is dealt with by the scanner in a pre-defined way (depending on how the anti-virus program was configured during or after installation), and the user is notified. This may have a considerable performance impact on the operating system, though the degree of impact depends on how well the scanner was programmed. The goal is to stop any operations the malware may attempt on the system before they occur, including activities which might exploit bugs or trigger unexpected operating system behavior.
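The core of such a check can be sketched as a hash lookup against a signature blocklist (the blocklist entry and file contents below are hypothetical, and real scanners use far richer heuristics than exact hashes):

```python
import hashlib

# Sketch of an on-access check: before a file is handed to the caller,
# hash its contents and compare against known-malware signatures.
KNOWN_MALWARE = {
    hashlib.sha256(b"EICAR-like test payload").hexdigest(),  # hypothetical entry
}

def on_access_check(contents: bytes) -> bool:
    """Return True if access should be allowed, False if blocked."""
    digest = hashlib.sha256(contents).hexdigest()
    return digest not in KNOWN_MALWARE

def guarded_open(contents: bytes) -> bytes:
    """Stand-in for the OS hook: refuse access to flagged content."""
    if not on_access_check(contents):
        raise PermissionError("blocked by on-access scanner")
    return contents
```

Here `guarded_open` plays the role of the kernel hook: the access either completes normally or is stopped before the flagged content ever reaches the requesting program.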
Anti-malware programs can combat malware in two ways:
Real-time protection from malware works identically to real-time antivirus protection: the software scans disk files at download time and blocks the activity of components known to represent malware. In some cases, it may also intercept attempts to install start-up items or to modify browser settings. Because many malware components are installed as a result of browser exploits or user error, using security software (some of which is anti-malware, though much is not) to "sandbox" browsers (essentially isolating the browser from the computer and hence from any malware-induced change) can also be effective in helping to restrict any damage done.
Examples of Microsoft Windows antivirus and anti-malware software include the optional Microsoft Security Essentials [290] (for Windows XP, Vista, and Windows 7) for real-time protection, the Windows Malicious Software Removal Tool [291] (now included with Windows (Security) Updates on " Patch Tuesday", the second Tuesday of each month), and Windows Defender (an optional download in the case of Windows XP, incorporating MSE functionality in the case of Windows 8 and later). [292] Additionally, several capable antivirus software programs are available for free download from the Internet (usually restricted to non-commercial use). [293] Tests found some free programs to be competitive with commercial ones. [293] Microsoft's System File Checker can be used to check for and repair corrupted system files.
Some viruses disable System Restore and other important Windows tools such as Task Manager and Command Prompt. Many such viruses can be removed by rebooting the computer, entering Windows safe mode with networking [294], and then using system tools or Microsoft Safety Scanner. [295]
Hardware implants can be of any type, so there can be no general way to detect them.
As malware also harms the compromised websites (by breaking reputation, blacklisting in search engines, etc.), some websites offer vulnerability scanning. [296] [297] [298] [299] Such scans check the website, detect malware, may note outdated software, and may report known security issues.
As a last resort, computers can be protected from malware, and infected computers can be prevented from disseminating trusted information, by imposing an "air gap" (i.e. completely disconnecting them from all other networks). However, malware can still cross the air gap in some situations. For example, removable media can carry malware across the gap. In December 2013 researchers in Germany showed one way that an apparent air gap can be defeated. [300]
"AirHopper", [301] "BitWhisper", [302] "GSMem" [303] and "Fansmitter" [304] are four techniques introduced by researchers that can leak data from air-gapped computers using electromagnetic, thermal and acoustic emissions.
Grayware is a term applied to unwanted applications or files that are not classified as malware, but can worsen the performance of computers and may cause security risks. [305]
It describes applications that behave in an annoying or undesirable manner, and yet are less serious or troublesome than malware. Grayware encompasses spyware, adware, fraudulent dialers, joke programs, remote access tools and other unwanted programs that may harm the performance of computers or cause inconvenience. The term came into use around 2004. [306]
Another term, potentially unwanted program (PUP) or potentially unwanted application (PUA), [307] refers to applications that would be considered unwanted despite often having been downloaded by the user, possibly after failing to read a download agreement. PUPs include spyware, adware, and fraudulent dialers. Many security products classify unauthorised key generators as grayware, although they frequently carry true malware in addition to their ostensible purpose.
Software maker Malwarebytes lists several criteria for classifying a program as a PUP. [308] Some types of adware (using stolen certificates) turn off anti-malware and virus protection; technical remedies are available. [281]
Before Internet access became widespread, viruses spread on personal computers by infecting the executable boot sectors of floppy disks. By inserting a copy of itself into the machine code instructions in these executables, a virus causes itself to be run whenever a program is run or the disk is booted. Early computer viruses were written for the Apple II and Macintosh, but they became more widespread with the dominance of the IBM PC and MS-DOS. Because executable-infecting viruses depend on users exchanging software or bootable floppies and thumb drives, they spread rapidly in computer hobbyist circles. [309]
The first worms, network-borne infectious programs, originated not on personal computers, but on multitasking Unix systems. The first well-known worm was the Internet Worm of 1988, which infected SunOS and VAX BSD systems. Unlike a virus, this worm did not insert itself into other programs. Instead, it exploited security holes ( vulnerabilities) in network server programs and started itself running as a separate process. [310] This same behavior is used by today's worms as well. [311] [312]
With the rise of the Microsoft Windows platform in the 1990s, and the flexible macros of its applications, it became possible to write infectious code in the macro language of Microsoft Word and similar programs. These macro viruses infect documents and templates rather than applications ( executables), but rely on the fact that macros in a Word document are a form of executable code. [313]
The notion of a self-reproducing computer program can be traced back to initial theories about the operation of complex automata. [314] John von Neumann showed that in theory a program could reproduce itself, a plausibility result in computability theory. Fred Cohen experimented with computer viruses, confirmed von Neumann's postulate, and investigated other properties of malware such as detectability and self-obfuscation using rudimentary encryption. His doctoral dissertation was on the subject of computer viruses. [315] The combination of cryptographic technology as part of a virus's payload, exploited for attack purposes, was initiated and investigated from the mid-1990s, and includes initial ransomware and evasion ideas. [316]
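Von Neumann's result can be demonstrated concretely with a quine, a program whose output is its own source code. The sketch below, when run, prints an exact copy of its two executable lines:

```python
# A quine: the two lines below print an exact copy of themselves,
# a minimal demonstration that self-reproducing programs exist.
source = 'print("source = " + repr(source) + "\\n" + source)'
print("source = " + repr(source) + "\n" + source)
```

A quine reproduces only its own text, of course; a virus additionally inserts that copy into other executables, but the self-description trick at its core is the same.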
An advanced persistent threat is a set of stealthy and continuous computer hacking processes, often orchestrated by a person or persons targeting a specific entity. An APT usually targets either private organizations, states or both for business or political motives. APT processes require a high degree of covertness over a long period of time. The "advanced" process signifies sophisticated techniques using malware to exploit vulnerabilities in systems. The "persistent" process suggests that an external command and control system is continuously monitoring and extracting data from a specific target. The "threat" process indicates human involvement in orchestrating the attack. [317]
APT usually refers to a group, such as a government, with both the capability and the intent to target, persistently and effectively, a specific entity. The term is commonly used to refer to cyber threats, in particular that of Internet-enabled espionage using a variety of intelligence gathering techniques to access sensitive information, [318] but applies equally to other threats such as that of traditional espionage or attacks. [319] Other recognized attack vectors include infected media, supply chain compromise, and social engineering. The purpose of these attacks is to place custom malicious code on one or multiple computers for specific tasks and to remain undetected for the longest possible period. Knowing attacker artifacts, such as file names, can help a professional perform a network-wide search to identify all affected systems. [320] Individuals, such as an individual hacker, are not usually referred to as an APT, as they rarely have the resources to be both advanced and persistent even if they are intent on gaining access to, or attacking, a specific target. [321]
First warnings against targeted, socially-engineered emails dropping trojans to exfiltrate sensitive information were published by UK and US CERT organisations in 2005, although the name "APT" was not used. [322] The term "advanced persistent threat" is widely cited as originating from the United States Air Force in 2006 [323] with Colonel Greg Rattray frequently cited as the individual who coined the term. [324]
The Stuxnet computer worm, which targeted the computer hardware of Iran's nuclear program, is one example. In this case, the Iranian government might consider the Stuxnet creators to be an advanced persistent threat.
Within the computer security community, and increasingly within the media, the term is almost always used in reference to a long-term pattern of sophisticated hacking attacks aimed at governments, companies, and political activists, and by extension, also to refer to the groups behind these attacks. [325] Advanced persistent threat (APT) as a term may be shifting focus to computer based hacking due to the rising number of occurrences. PC World reported an 81 percent increase from 2010 to 2011 of particularly advanced targeted computer hacking attacks. [326]
A common misconception associated with the APT is that the APT only targets Western governments. While examples of technological APTs against Western governments may be more publicized in the West, actors in many nations have used cyberspace as a means to gather intelligence on individuals and groups of individuals of interest. [327] [328] [329] The United States Cyber Command is tasked with coordinating the US military's response to this cyber threat.
Numerous sources have alleged that some APT groups are affiliated with, or are agents of, nation-states. [330] [331] [332] Businesses holding a large quantity of personally identifiable information are at high risk of being targeted by advanced persistent threats, including: [318]
Bodmer, Kilger, Carpenter and Jones defined the following APT criteria: [334]
Actors behind advanced persistent threats create a growing and changing risk to organizations' financial assets, intellectual property, and reputation [335] by following a continuous process or kill chain:
The global landscape of APTs from all sources is sometimes referred to in the singular as "the" APT, as are references to the actor behind a specific incident or series of incidents.
In 2013, Mandiant presented results of their research on alleged Chinese attacks using APT methodology between 2004 and 2013 [336] that followed a similar lifecycle:
In incidents analysed by Mandiant, the average period over which the attackers controlled the victim's network was one year, with the longest lasting almost five years. [336] The infiltrations were allegedly performed by the Shanghai-based Unit 61398 of the People's Liberation Army. Chinese officials have denied any involvement in these attacks. [337]
Definitions of precisely what an APT is vary, but can be summarized by the requirements named below: [319] [321] [338]
There are hundreds of millions of malware variations, which makes it extremely challenging to protect organizations from APTs. While APT activities are stealthy and hard to detect, the command and control network traffic associated with an APT can be detected at the network layer. Deep log analysis and log correlation from various sources can be useful in detecting APT activities. Agents can be used to collect logs (TCP and UDP) directly from assets into a syslog server. A Security Information and Event Management (SIEM) tool can then correlate and analyze the logs. While it is challenging to separate noise from legitimate traffic, a good log-correlation tool can filter out the legitimate traffic, so security staff can focus on the anomalies. [317] Good asset management, with documented components of the original operating system and installed software, will help IT security analysts detect new files on the system.
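As a sketch of the log-correlation idea above, the following hypothetical Python function flags connection pairs that "phone home" at nearly constant intervals, the regular beaconing pattern typical of command and control traffic. The event format, field names, and thresholds are illustrative assumptions, not part of any real SIEM product.

```python
from collections import defaultdict

def find_beacons(events, min_count=4, max_jitter=2.0):
    """Group (timestamp, src, dst) log events by connection pair and flag
    pairs whose inter-connection intervals are nearly constant -- the
    regular "phone home" pattern typical of C2 beaconing."""
    by_pair = defaultdict(list)
    for ts, src, dst in events:
        by_pair[(src, dst)].append(ts)
    suspicious = []
    for pair, times in by_pair.items():
        times.sort()
        if len(times) < min_count:
            continue  # too few connections to establish a pattern
        gaps = [b - a for a, b in zip(times, times[1:])]
        if max(gaps) - min(gaps) <= max_jitter:
            suspicious.append(pair)  # intervals are suspiciously regular
    return suspicious
```

A real deployment would of course correlate many more attributes (ports, byte counts, process names) across log sources; the point here is only that periodicity is detectable at the network layer even when payloads are encrypted.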
A computer network, or data network, is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes (data links). These data links are established over cable media such as copper wires or optical fiber, or wireless media such as Wi-Fi.
Network computer devices that originate, route and terminate the data are called network nodes. [339] Nodes can include hosts such as personal computers, phones, and servers, as well as networking hardware. Two such devices can be said to be networked together when one device is able to exchange information with the other device, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered (i.e. carried as payload) over other more general communications protocols. This formidable collection of information technology requires skilled network management to keep it all running reliably.
Computer networks support an enormous number of applications and services such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers, printers, and fax machines, and use of email and instant messaging applications as well as many others. Computer networks differ in the transmission medium used to carry their signals, communications protocols to organize network traffic, the network's size, topology, traffic control mechanism and organizational intent. The best-known computer network is the Internet.
The chronology of significant computer-network developments includes:
Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines.
A computer network facilitates interpersonal communications allowing users to communicate efficiently and easily via various means: email, instant messaging, online chat, telephone, video telephone calls, and video conferencing. A network allows sharing of network and computing resources. Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or use of a shared storage device. A network allows sharing of files, data, and other types of information giving authorized users the ability to access information stored on other computers on the network. Distributed computing uses computing resources across a network to accomplish tasks.
A computer network may be used by security hackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from accessing the network via a denial-of-service attack.
Computer communication links that do not support packets, such as traditional point-to-point telecommunication links, simply transmit data as a bit stream. However, most information in computer networks is carried in packets. A network packet is a formatted unit of data (a sequence of bits or bytes, usually from a few tens of bytes to a few kilobytes long) carried by a packet-switched network. Packets are sent through the network to their destination. Once the packets arrive, they are reassembled into their original message.
Packets consist of two kinds of data: control information, and user data (payload). The control information provides data the network needs to deliver the user data, for example: source and destination network addresses, error detection codes, and sequencing information. Typically, control information is found in packet headers and trailers, with payload data in between.
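The header/payload split can be illustrated with a toy packet format. The field layout below (source address, destination address, sequence number, payload length) is a made-up example for illustration, not a real protocol header:

```python
import struct

# Hypothetical header: 4-byte source, 4-byte destination, 16-bit sequence
# number, 16-bit payload length, all in network (big-endian) byte order.
HEADER = struct.Struct("!4s4sHH")

def build_packet(src, dst, seq, payload):
    """Prepend the fixed-size control information to the user data."""
    return HEADER.pack(src, dst, seq, len(payload)) + payload

def parse_packet(packet):
    """Split a received packet back into control information and payload."""
    src, dst, seq, length = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:HEADER.size + length]
    return src, dst, seq, payload
```

The sequence number is what lets the receiver reassemble packets into the original message even when they arrive out of order.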
With packets, the bandwidth of the transmission medium can be better shared among users than if the network were circuit switched. When one user is not sending packets, the link can be filled with packets from other users, and so the cost can be shared, with relatively little interference, provided the link isn't overused. Often the route a packet needs to take through a network is not immediately available. In that case the packet is queued and waits until a link is free.
The physical layout of a network is usually less important than the topology that connects network nodes. Most diagrams that describe a physical network are therefore topological, rather than geographic. The symbols on these diagrams usually denote network links and network nodes.
The transmission media (often referred to in the literature as the physical media) used to link devices to form a computer network include electrical cable, optical fiber, and radio waves. In the OSI model, these are defined at layers 1 and 2 — the physical layer and the data link layer.
A widely adopted family of transmission media used in local area network (LAN) technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Ethernet transmits data over both copper and fiber cables. Wireless LAN standards use radio waves, others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data.
The following wired technologies are listed, roughly, from slowest to fastest transmission speed.
Price is a main factor distinguishing wired and wireless technology options in a business. Wireless options command a price premium that can make purchasing wired computers, printers, and other devices a financial benefit. Before deciding to purchase hard-wired technology products, a review of the restrictions and limitations of the options is necessary. Business and employee needs may override any cost considerations. [352]
There have been various attempts at transporting data over exotic media:
Both cases have a large round-trip delay time, which gives slow two-way communication, but doesn't prevent sending large amounts of information.
Apart from any physical transmission media there may be, networks comprise additional basic system building blocks, such as network interface controllers (NICs), repeaters, hubs, bridges, switches, routers, modems, and firewalls. Any particular piece of equipment will frequently contain multiple building blocks and perform multiple functions.
A network interface controller (NIC) is computer hardware that provides a computer with the ability to access the transmission media, and has the ability to process low-level network information. For example, the NIC may have a connector for accepting a cable, or an aerial for wireless transmission and reception, and the associated circuitry.
The NIC responds to traffic addressed to a network address for either the NIC or the computer as a whole.
In Ethernet networks, each network interface controller has a unique Media Access Control (MAC) address—usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
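A small helper can make the OUI/device split concrete. This is an illustrative sketch only; a real tool would also consult the IEEE registry to resolve the prefix to a manufacturer name:

```python
def split_mac(mac):
    """Split a MAC address like '00:1A:2B:3C:4D:5E' into the IEEE-assigned
    OUI (the three most significant octets, identifying the manufacturer)
    and the manufacturer-assigned device portion (the three least
    significant octets)."""
    octets = mac.split(":")
    if len(octets) != 6 or not all(len(o) == 2 for o in octets):
        raise ValueError("expected six colon-separated octets")
    return ":".join(octets[:3]), ":".join(octets[3:])
```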
A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal is retransmitted at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart.
A repeater with multiple ports is known as an Ethernet hub. Repeaters work on the physical layer of the OSI model. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters that can be used in a row, e.g., the Ethernet 5-4-3 rule.
Hubs and repeaters in LANs have been made largely obsolete by modern switches.
A network bridge connects and filters traffic between two network segments at the data link layer (layer 2) of the OSI model to form a single network. This breaks the network's collision domain but maintains a unified broadcast domain. Network segmentation breaks down a large, congested network into an aggregation of smaller, more efficient networks.
Bridges come in three basic types:
A network switch is a device that forwards and filters OSI layer 2 datagrams (frames) between ports based on the destination MAC address in each frame. [355] A switch is distinct from a hub in that it only forwards the frames to the physical ports involved in the communication rather than all ports connected. It can be thought of as a multi-port bridge. [356] It learns to associate physical ports to MAC addresses by examining the source addresses of received frames. If an unknown destination is targeted, the switch broadcasts to all ports but the source. Switches normally have numerous ports, facilitating a star topology for devices, and cascading additional switches.
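The learning behavior described above can be sketched in a few lines. This toy model (port numbers and frame fields are simplified assumptions) captures the learn-from-source, forward-or-flood logic:

```python
class LearningSwitch:
    """Minimal model of MAC learning: remember which port each source
    address was seen on, and flood frames whose destination is unknown."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, in_port, src_mac, dst_mac):
        """Return the set of ports the frame is sent out of."""
        self.mac_table[src_mac] = in_port        # learn from the source
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}     # forward to the known port
        return self.ports - {in_port}            # flood all ports but the source
```

After one frame in each direction, the switch has learned both addresses and all subsequent traffic between the two hosts goes out of exactly one port.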
A router is an internetworking device that forwards packets between networks by processing the routing information included in the packet or datagram (Internet protocol information from layer 3). The routing information is often processed in conjunction with the routing table (or forwarding table). A router uses its routing table to determine where to forward packets. A destination in a routing table can include a "null" interface, also known as the "black hole" interface because data can go into it, however, no further processing is done for said data, i.e. the packets are dropped.
Modems (MOdulator-DEModulators) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. To do this, one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Modems are commonly used for telephone lines, for example with Digital Subscriber Line (DSL) technology.
A firewall is a network device for controlling network security and access rules. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks.
Network topology is the layout or organizational hierarchy of interconnected nodes of a computer network. Different network topologies can affect throughput, but reliability is often more critical. With many technologies, such as bus networks, a single failure can cause the network to fail entirely. In general the more interconnections there are, the more robust the network is; but the more expensive it is to install.
Common layouts are:
Note that the physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring (actually two counter-rotating rings), but the physical topology is often a star, because all neighboring connections can be routed via a central physical location.
An overlay network is a virtual computer network that is built on top of another network. Nodes in the overlay network are connected by virtual or logical links. Each link corresponds to a path, perhaps through many physical links, in the underlying network. The topology of the overlay network may (and often does) differ from that of the underlying one. For example, many peer-to-peer networks are overlay networks. They are organized as nodes of a virtual system of links that run on top of the Internet. [357]
Overlay networks have been around since the invention of networking when computer systems were connected over telephone lines using modems, before any data network existed.
The most striking example of an overlay network is the Internet itself. The Internet itself was initially built as an overlay on the telephone network. [357] Even today, each Internet node can communicate with virtually any other through an underlying mesh of sub-networks of wildly different topologies and technologies. Address resolution and routing are the means that allow mapping of a fully connected IP overlay network to its underlying network.
Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In this case, the underlying network is an IP network, and the overlay network is a table (actually a map) indexed by keys.
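A toy version of the key-to-node mapping might look like this. The hashing scheme is a simplified consistent-hashing sketch for illustration, not a real DHT protocol such as Chord or Kademlia:

```python
import hashlib
from bisect import bisect_left

class ToyDHT:
    """Consistent-hash sketch of a DHT: node names and keys are hashed onto
    the same ring, and a key maps to the first node at or after its hash
    position (wrapping around)."""

    def __init__(self, nodes):
        self.ring = sorted((self._h(n), n) for n in nodes)

    @staticmethod
    def _h(value):
        # Use the first 8 bytes of SHA-256 as a position on the ring.
        return int.from_bytes(hashlib.sha256(value.encode()).digest()[:8], "big")

    def node_for(self, key):
        pos = bisect_left(self.ring, (self._h(key), ""))
        return self.ring[pos % len(self.ring)][1]
```

The appeal of this scheme is that adding or removing a node only remaps the keys adjacent to it on the ring, rather than rehashing everything.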
Overlay networks have also been proposed as a way to improve Internet routing, such as through quality of service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP Multicast have not seen wide acceptance largely because they require modification of all routers in the network.[ citation needed] On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay network has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes that a message traverses before it reaches its destination.
For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes end system multicast, [358] resilient routing and quality of service studies, among others.
A communication protocol is a set of rules for exchanging information over a network. In a protocol stack (also see the OSI model), each protocol leverages the services of the protocol layer below it, until the lowest layer controls the hardware which sends information across the media. The use of protocol layering is today ubiquitous across the field of computer networking. An important example of a protocol stack is HTTP (the World Wide Web protocol) running over TCP over IP (the Internet protocols) over IEEE 802.11 (the Wi-Fi protocol). This stack is used between the wireless router and the home user's personal computer when the user is surfing the web.
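Layered encapsulation can be illustrated with a deliberately simplified sketch in which each protocol's header "wraps" the payload handed down from the layer above. Real headers are binary structures, not bracketed strings; the layer names are just labels here:

```python
def encapsulate(payload, *layers):
    """Wrap an application payload in successive protocol layers, the first
    named layer becoming the outermost -- e.g. an HTTP request carried in
    TCP, carried in IP, carried in an 802.11 frame."""
    for layer in reversed(layers):
        payload = f"[{layer} {payload}]"
    return payload
```

Each layer only inspects its own header and treats everything inside as opaque payload, which is what makes the layers independently replaceable.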
Communication protocols have various characteristics. They may be connection-oriented or connectionless, they may use circuit mode or packet switching, and they may use hierarchical addressing or flat addressing.
There are many communication protocols, a few of which are described below.
IEEE 802 is a family of IEEE standards dealing with local area networks and metropolitan area networks. The complete IEEE 802 protocol suite provides a diverse set of networking capabilities. The protocols have a flat addressing scheme. They operate mostly at layers 1 and 2 of the OSI model.
For example, MAC bridging (IEEE 802.1D) deals with the forwarding of Ethernet frames using the Spanning Tree Protocol. IEEE 802.1Q describes VLANs, and IEEE 802.1X defines a port-based Network Access Control protocol, which forms the basis for the authentication mechanisms used in VLANs (but it is also found in WLANs) – it is what the home user sees when the user has to enter a "wireless access key".
Ethernet, sometimes simply called LAN, is a family of protocols used in wired LANs, described by a set of standards together called IEEE 802.3 published by the Institute of Electrical and Electronics Engineers.
Wireless LAN, also widely known as WLAN or WiFi, is probably the most well-known member of the IEEE 802 protocol family for home users today. It is standardized by IEEE 802.11 and shares many properties with wired Ethernet.
The Internet Protocol Suite, also called TCP/IP, is the foundation of all modern networking. It offers connectionless as well as connection-oriented services over an inherently unreliable network traversed by datagram transmission at the Internet Protocol (IP) level. At its core, the protocol suite defines the addressing, identification, and routing specifications for Internet Protocol Version 4 (IPv4) and for IPv6, the next generation of the protocol with a much enlarged addressing capability.
Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers. They were originally designed to transport circuit mode communications from a variety of different sources, primarily to support real-time, uncompressed, circuit-switched voice encoded in PCM (Pulse-Code Modulation) format. However, due to its protocol neutrality and transport-oriented features, SONET/SDH also was the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames.
Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-sized cells. This differs from other protocols such as the Internet Protocol Suite or Ethernet that use variable sized packets or frames. ATM has similarity with both circuit and packet switched networking. This makes it a good choice for a network that must handle both traditional high-throughput data traffic, and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins.
While the role of ATM is diminishing in favor of next-generation networks, it still plays a role in the last mile, which is the connection between an Internet service provider and the home user. [359]
There are a number of different digital cellular standards, including: Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN). [360]
Computer network types by scale
A network can be characterized by its physical capacity or its organizational purpose. Use of the network, including user authorization and access rights, differs accordingly.
A nanoscale communication network has key components implemented at the nanoscale including message carriers and leverages physical principles that differ from macroscale communication mechanisms. Nanoscale communication extends communication to very small sensors and actuators such as those found in biological systems and also tends to operate in environments that would be too harsh for classical communication. [361]
A personal area network (PAN) is a computer network used for communication among computers and other information-technology devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and even video game consoles. A PAN may include wired and wireless devices. The reach of a PAN typically extends to 10 meters. [362] A wired PAN is usually constructed with USB and FireWire connections, while technologies such as Bluetooth and infrared communication typically form a wireless PAN.
A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as a home, school, office building, or closely positioned group of buildings. Each computer or device on the network is a node. Wired LANs are most likely based on Ethernet technology. Newer standards such as ITU-T G.hn also provide a way to create a wired LAN using existing wiring, such as coaxial cables, telephone lines, and power lines. [363]
The defining characteristics of a LAN, in contrast to a wide area network (WAN), include higher data transfer rates, limited geographic range, and lack of reliance on leased lines to provide connectivity. Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates up to 100 Gbit/s, standardized by IEEE in 2010. [364] Currently, 400 Gbit/s Ethernet is being developed.
A LAN can be connected to a WAN using a router.
A home area network (HAN) is a residential LAN used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a cable TV or digital subscriber line (DSL) provider.
A storage area network (SAN) is a dedicated network that provides access to consolidated, block level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the devices appear like locally attached devices to the operating system. A SAN typically has its own network of storage devices that are generally not accessible through the local area network by other devices. The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments.
A campus area network (CAN) is made up of an interconnection of LANs within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, copper plant, Cat5 cabling, etc.) are almost entirely owned by the campus tenant / owner (an enterprise, university, government, etc.).
For example, a university campus network is likely to link a variety of campus buildings to connect academic colleges or departments, the library, and student residence halls.
A backbone network is part of a computer network infrastructure that provides a path for the exchange of information between different LANs or sub-networks. A backbone can tie together diverse networks within the same building, across different buildings, or over a wide area.
For example, a large company might implement a backbone network to connect departments that are located around the world. The equipment that ties together the departmental networks constitutes the network backbone. When designing a network backbone, network performance and network congestion are critical factors to take into account. Normally, the backbone network's capacity is greater than that of the individual networks connected to it.
Another example of a backbone network is the Internet backbone, which is the set of wide area networks (WANs) and core routers that tie together all networks connected to the Internet.
A Metropolitan area network (MAN) is a large computer network that usually spans a city or a large campus.
A wide area network (WAN) is a computer network that covers a large geographic area such as a city or country, or even spans intercontinental distances. A WAN uses a communications channel that combines many types of media such as telephone lines, cables, and airwaves. A WAN often makes use of transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.
An enterprise private network is a network that a single organization builds to interconnect its office locations (e.g., production sites, head offices, remote offices, shops) so they can share computer resources.
A virtual private network (VPN) is an overlay network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network when this is the case. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.
VPN may have best-effort performance, or may have a defined service level agreement (SLA) between the VPN customer and the VPN service provider. Generally, a VPN has a topology more complex than point-to-point.
A global area network (GAN) is a network used for supporting mobile users across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs. [365]
Networks are typically managed by the organizations that own them. Private enterprise networks may use a combination of intranets and extranets. They may also provide network access to the Internet, which has no single owner and permits virtually unlimited global connectivity.
An intranet is a set of networks that are under the control of a single administrative entity. The intranet uses the IP protocol and IP-based tools such as web browsers and file transfer applications. The administrative entity limits use of the intranet to its authorized users. Most commonly, an intranet is the internal LAN of an organization. A large intranet typically has at least one web server to provide users with organizational information. Colloquially, an intranet can also refer to everything behind the router on a local area network.
An extranet is a network that is also under the administrative control of a single organization, but supports a limited connection to a specific external network. For example, an organization may provide access to some aspects of its intranet to share data with its business partners or customers. These other entities are not necessarily trusted from a security standpoint. Network connection to an extranet is often, but not always, implemented via WAN technology.
An internetwork is the connection of multiple computer networks via a common routing technology using routers.
The Internet is the largest example of an internetwork. It is a global system of interconnected governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet Protocol Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet is also the communications backbone underlying the World Wide Web (WWW).
Participants in the Internet use several hundred documented, and often standardized, protocols compatible with the Internet Protocol Suite and an addressing system (IP addresses) administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.
A darknet is an overlay network, typically running on the Internet, that is only accessible through specialized software. A darknet is an anonymizing network where connections are made only between trusted peers — sometimes called "friends" (F2F) [366] — using non-standard protocols and ports.
Darknets are distinct from other distributed peer-to-peer networks as sharing is anonymous (that is, IP addresses are not publicly shared), and therefore users can communicate with little fear of governmental or corporate interference. [367]
Routing is the process of selecting network paths to carry network traffic. Routing is performed for many kinds of networks, including circuit switching networks and packet switched networks.
In packet switched networks, routing directs packet forwarding (the transit of logically addressed network packets from their source toward their ultimate destination) through intermediate nodes. Intermediate nodes are typically network hardware devices such as routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though they are not specialized hardware and may suffer from limited performance. The routing process usually directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Thus, constructing routing tables, which are held in the router's memory, is very important for efficient routing.
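The table-driven forwarding described above can be sketched in a few lines. The following is a minimal longest-prefix-match lookup; the routes and next-hop addresses are hypothetical examples (using RFC 1918 and documentation address space), not taken from any real router:

```python
import ipaddress

# Hypothetical routing table: (destination prefix, next-hop address).
routing_table = [
    (ipaddress.ip_network("0.0.0.0/0"), "192.0.2.1"),    # default route
    (ipaddress.ip_network("10.0.0.0/8"), "192.0.2.2"),
    (ipaddress.ip_network("10.1.0.0/16"), "192.0.2.3"),
]

def next_hop(destination: str) -> str:
    """Return the next hop for the most specific (longest-prefix) match."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routing_table if addr in net]
    best = max(matches, key=lambda m: m[0].prefixlen)
    return best[1]

print(next_hop("10.1.2.3"))   # 192.0.2.3 — via the /16, the most specific match
print(next_hop("8.8.8.8"))    # 192.0.2.1 — only the default route matches
```

Real routers implement the same idea with specialized data structures (such as tries) and hardware lookup tables, but the selection rule is the same: the most specific matching prefix wins.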
There are usually multiple routes that can be taken, and to choose between them, different elements can be considered when deciding which routes get installed into the routing table, such as the prefix length, the metric, and the administrative distance (in descending order of priority).
Most routing algorithms use only one network path at a time. Multipath routing techniques enable the use of multiple alternative paths.
Routing, in a more narrow sense of the term, is often contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, structured addressing (routing, in the narrow sense) outperforms unstructured addressing (bridging). Routing has become the dominant form of addressing on the Internet. Bridging is still widely used within localized environments.
Network services are applications hosted by servers on a computer network, to provide some functionality for members or users of the network, or to help the network itself to operate.
The World Wide Web, e-mail, [368] printing and network file sharing are examples of well-known network services. Network services such as the Domain Name System (DNS) give names for IP addresses (people remember names like "nm.lan" better than numbers like "210.121.67.18"), [369] and the Dynamic Host Configuration Protocol (DHCP) ensures that the equipment on the network has a valid IP address. [370]
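As a minimal illustration of name resolution as a network service, Python's standard library can query the system resolver directly. "localhost" is used here so the example works without reaching an external DNS server:

```python
import socket

# Resolve a human-readable host name to an IPv4 address, as DNS does for
# names on the Internet. "localhost" resolves locally, so this does not
# depend on any external name server.
ip = socket.gethostbyname("localhost")
print(ip)  # 127.0.0.1
```

For a public name such as "example.com", the same call would trigger a DNS query to the configured name servers.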
Services are usually based on a service protocol that defines the format and sequencing of messages between clients and servers of that network service.
Network performance is usually measured by the quality of service of a telecommunications product. The parameters that affect this typically include throughput, jitter, bit error rate and latency.
Different performance measures apply to circuit-switched networks and to packet-switched networks such as ATM; a circuit-switched network, for example, is characterized by its grade of service, while an ATM network is characterized by quality-of-service parameters.
There are many ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modelled instead of measured. For example, state transition diagrams are often used to model queuing performance in a circuit-switched network. The network planner uses these diagrams to analyze how the network performs in each state, ensuring that the network is optimally designed. [373]
Network congestion occurs when a link or node is carrying so much data that its quality of service deteriorates. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either to only a small increase in network throughput or to an actual reduction in network throughput.
Network protocols that use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion—even after the initial load is reduced to a level that would not normally induce network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.
Modern networks use congestion control, congestion avoidance and traffic control techniques to try to avoid congestion collapse. These include: exponential backoff in protocols such as 802.11's CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers. Another method to avoid the negative effects of network congestion is implementing priority schemes, so that some packets are transmitted with higher priority than others. Priority schemes do not solve network congestion by themselves, but they help to alleviate the effects of congestion for some services. An example of this is 802.1p. A third method to avoid network congestion is the explicit allocation of network resources to specific flows. One example of this is the use of Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn standard, which provides high-speed (up to 1 Gbit/s) Local area networking over existing home wires (power lines, phone lines and coaxial cables).
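The binary exponential backoff mentioned above can be sketched briefly. The slot time and retry cap below are illustrative values, not figures from the Ethernet or 802.11 standards:

```python
import random

SLOT_TIME = 1.0    # arbitrary time unit, not a real standard's slot time
MAX_RETRIES = 10   # illustrative cap on the backoff exponent

def backoff_delay(collisions: int) -> float:
    """Pick a random delay after the given number of successive collisions."""
    # After n collisions, wait a random number of slots in [0, 2**n - 1],
    # so the expected delay doubles with each successive collision.
    n = min(collisions, MAX_RETRIES)
    return random.randint(0, 2 ** n - 1) * SLOT_TIME

# After 3 collisions the station waits between 0 and 7 slot times.
print(backoff_delay(3))
```

Randomizing the delay is what desynchronizes the competing stations: if they all retried after a fixed interval, they would simply collide again.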
For the Internet, RFC 2914 addresses the subject of congestion control in detail.
Network resilience is "the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation." [374]
Network security consists of provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and its network-accessible resources. [375] Network security is the authorization of access to data in a network, which is controlled by the network administrator. Users are assigned an ID and password that allows them access to information and programs within their authority. Network security is used on a variety of computer networks, both public and private, to secure daily transactions and communications among businesses, government agencies and individuals.
Network surveillance is the monitoring of data being transferred over computer networks such as the Internet. The monitoring is often done surreptitiously and may be done by or at the behest of governments, by corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent agency.
Computer and network surveillance programs are widespread today, and almost all Internet traffic is or could potentially be monitored for clues to illegal activity.
Surveillance is very useful to governments and law enforcement to maintain social control, recognize and monitor threats, and prevent/investigate criminal activity. With the advent of programs such as the Total Information Awareness program, technologies such as high speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens. [376]
However, many civil rights and privacy groups—such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union—have expressed concern that increasing surveillance of citizens may lead to a mass surveillance society, with limited political and personal freedoms. Fears such as this have led to numerous lawsuits such as Hepting v. AT&T. [376] [377] The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance". [378] [379]
End-to-end encryption (E2EE) is a digital communications paradigm of uninterrupted protection of data traveling between two communicating parties. It involves the originating party encrypting data so only the intended recipient can decrypt it, with no dependency on third parties. End-to-end encryption prevents intermediaries, such as Internet providers or application service providers, from discovering or tampering with communications. End-to-end encryption generally protects both confidentiality and integrity.
Examples of end-to-end encryption include HTTPS for web traffic, PGP for email, OTR for instant messaging, ZRTP for telephony, and TETRA for radio.
Typical server-based communications systems do not include end-to-end encryption. These systems can only guarantee protection of communications between clients and servers, not between the communicating parties themselves. Examples of non-E2EE systems are Google Talk, Yahoo Messenger, Facebook, and Dropbox. Some such systems, for example LavaBit and SecretInk, have even described themselves as offering "end-to-end" encryption when they do not. Some systems that normally offer end-to-end encryption have turned out to contain a back door that subverts negotiation of the encryption key between the communicating parties, for example Skype or Hushmail.
The end-to-end encryption paradigm does not directly address risks at the communications endpoints themselves, such as the technical exploitation of clients, poor quality random number generators, or key escrow. E2EE also does not address traffic analysis, which relates to things such as the identities of the end points and the times and quantities of messages that are sent.
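The end-to-end principle can be illustrated with a toy sketch: the two endpoints share a key, so an intermediary relaying the message sees only ciphertext. The XOR one-time pad used here is for illustration only; real E2EE systems use authenticated public-key protocols, not pre-shared pads:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # known only to the two endpoints

ciphertext = xor_bytes(message, key)     # this is all the relay server sees
recovered = xor_bytes(ciphertext, key)   # only the key holder can do this

assert recovered == message
```

Note what the sketch also shows by omission: the relay still sees that a message of a given length was sent at a given time, which is exactly the traffic-analysis exposure described above.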
The introduction and rapid growth of e-commerce on the World Wide Web in the mid-1990s made it obvious that some form of authentication and encryption was needed. Netscape took the first shot at a new standard. At the time, the dominant web browser was Netscape Navigator. Netscape created a standard called Secure Sockets Layer (SSL). SSL requires a server with a certificate. When a client requests access to an SSL-secured server, the server sends a copy of the certificate to the client. The SSL client checks this certificate (web browsers come preloaded with a list of trusted CA root certificates), and if the certificate checks out, the server is authenticated and the client negotiates a symmetric-key cipher for use in the session. The session then takes place in an encrypted tunnel between the SSL server and the SSL client. [351]
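The handshake just described can be sketched from the client side with Python's standard-library ssl module. This is an illustrative sketch: "example.com" stands in for any HTTPS server, and the connection step is guarded so the code degrades gracefully without network access:

```python
import socket
import ssl

# Load the platform's trusted CA root certificates and require verification.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED

def fetch_tls_details(host: str, port: int = 443):
    """Connect, verify the server certificate, and report session details."""
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                # At this point the certificate has been checked against the
                # CA roots and a symmetric session cipher has been negotiated.
                return tls.version(), tls.cipher()
    except OSError:
        return None  # no network access, or the handshake failed

details = fetch_tls_details("example.com")
print(details)
```

Modern deployments use TLS, the successor to SSL, but the certificate check and symmetric-key negotiation proceed as the paragraph describes.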
Users and network administrators typically have different views of their networks. Users can share printers and some servers from a workgroup, which usually means they are in the same geographic location and on the same LAN, whereas a network administrator is responsible for keeping that network up and running. A community of interest is less tied to a local area and should be thought of as a set of arbitrarily located users who share a set of servers, and who possibly also communicate via peer-to-peer technologies.
Network administrators can see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application layer gateways) that interconnect via the transmission media. Logical networks, called subnets in the TCP/IP architecture, map onto one or more transmission media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using virtual LAN (VLAN) technology.
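The subnet idea can be sketched concretely: two hosts are on the same logical network when their addresses fall within the same IP prefix. The campus-style addresses below are illustrative (RFC 1918 private space):

```python
import ipaddress

# Hypothetical subnet assigned to one building's LANs via VLANs.
building_a = ipaddress.ip_network("10.1.0.0/16")

host1 = ipaddress.ip_address("10.1.4.20")
host2 = ipaddress.ip_address("10.1.250.7")
host3 = ipaddress.ip_address("10.2.0.5")

print(host1 in building_a)   # True  — same logical subnet
print(host2 in building_a)   # True  — same subnet, possibly different cable
print(host3 in building_a)   # False — reached via a router in another subnet
```

Hosts in the same subnet can exchange traffic directly at the link layer; traffic between subnets must pass through a router.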
Both users and administrators are aware, to varying extents, of the trust and scope characteristics of a network. Again using TCP/IP architectural terminology, an intranet is a community of interest under private administration usually by an enterprise, and is only accessible by authorized users (e.g. employees). [380] Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers). [380]
Unofficially, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISP). From an engineering viewpoint, the Internet is the set of subnets, and aggregates of subnets, which share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS).
Over the Internet, there can be business-to-business (B2B), business-to-consumer (B2C) and consumer-to-consumer (C2C) communications. When money or sensitive information is exchanged, the communications are apt to be protected by some form of communications security mechanism. Intranets and extranets can be securely superimposed onto the Internet, without any access by general Internet users and administrators, using secure Virtual Private Network (VPN) technology.
The Internet is the global system of interconnected computer networks that use the Internet protocol suite (TCP/IP) to link devices worldwide. It is a network of networks that consists of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries a vast range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web (WWW), electronic mail, telephony, and file sharing.
The origins of the Internet date back to research commissioned by the federal government of the United States in the 1960s to build robust, fault-tolerant communication via computer networks. [381] The primary precursor network, the ARPANET, initially served as a backbone for interconnection of regional academic and military networks in the 1980s. The funding of the National Science Foundation Network as a new backbone in the 1980s, as well as private funding for other commercial extensions, led to worldwide participation in the development of new networking technologies, and the merger of many networks. [382] The linking of commercial networks and enterprises by the early 1990s marks the beginning of the transition to the modern Internet, [383] and generated a sustained exponential growth as generations of institutional, personal, and mobile computers were connected to the network. Although the Internet had been widely used by academia since the 1980s, commercialization incorporated its services and technologies into virtually every aspect of modern life.
Most traditional communications media, including telephony, radio, television, paper mail and newspapers are reshaped, redefined, or even bypassed by the Internet, giving birth to new services such as email, Internet telephony, Internet television, online music, digital newspapers, and video streaming websites. Newspaper, book, and other print publishing are adapting to website technology, or are reshaped into blogging, web feeds and online news aggregators. The Internet has enabled and accelerated new forms of personal interactions through instant messaging, Internet forums, and social networking. Online shopping has grown exponentially both for major retailers and small businesses and entrepreneurs, as it enables firms to extend their " brick and mortar" presence to serve a larger market or even sell goods and services entirely online. Business-to-business and financial services on the Internet affect supply chains across entire industries.
The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own policies. [384] Only the overreaching definitions of the two principal name spaces in the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. [385]
When the term Internet is used to refer to the specific global system of interconnected Internet Protocol (IP) networks, the word is a proper noun [386] that should be written with an initial capital letter. In common use and the media, it is often erroneously not capitalized, viz. the internet. Some guides specify that the word should be capitalized when used as a noun, but not capitalized when used as an adjective. [387] The Internet is also often referred to as the Net, as a short form of network. Historically, as early as 1849, the word internetted was used uncapitalized as an adjective, meaning interconnected or interwoven. [388] The designers of early computer networks used internet both as a noun and as a verb in shorthand form of internetwork or internetworking, meaning interconnecting computer networks. [389]
The terms Internet and World Wide Web are often used interchangeably in everyday speech; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web or the Web is only one of a large number of Internet services. The Web is a collection of interconnected documents (web pages) and other web resources, linked by hyperlinks and URLs. [390] As another point of comparison, Hypertext Transfer Protocol, or HTTP, is the language used on the Web for information transfer, yet it is just one of many languages or protocols that can be used for communication on the Internet. [391] The term Interweb is a portmanteau of Internet and World Wide Web typically used sarcastically to parody a technically unsavvy user.
Research into packet switching, one of the fundamental Internet technologies, started in the early 1960s in the work of Paul Baran, [392] and packet-switched networks such as the NPL network by Donald Davies, [393] ARPANET, Tymnet, the Merit Network, [394] Telenet, and CYCLADES [395] [396] were developed in the late 1960s and 1970s using a variety of protocols. [397] The ARPANET project led to the development of protocols for internetworking, by which multiple separate networks could be joined into a network of networks. [398] ARPANET development began with two network nodes which were interconnected between the Network Measurement Center at the University of California, Los Angeles (UCLA) Henry Samueli School of Engineering and Applied Science directed by Leonard Kleinrock, and the NLS system at SRI International (SRI) by Douglas Engelbart in Menlo Park, California, on 29 October 1969. [399] The third site was the Culler-Fried Interactive Mathematics Center at the University of California, Santa Barbara, followed by the University of Utah Graphics Department. In an early sign of future growth, fifteen sites were connected to the young ARPANET by the end of 1971. [400] [401] These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.
Early international collaborations on the ARPANET were rare. European developers were concerned with developing the X.25 networks. [402] Notable exceptions were the Norwegian Seismic Array ( NORSAR) in June 1973, followed in 1973 by Sweden with satellite links to the Tanum Earth Station and Peter T. Kirstein's research group in the United Kingdom, initially at the Institute of Computer Science, University of London and later at University College London. [403] [404] [405] In December 1974, RFC 675 (Specification of Internet Transmission Control Program), by Vinton Cerf, Yogen Dalal, and Carl Sunshine, used the term internet as a shorthand for internetworking and later RFCs repeated this use. [406] Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which permitted worldwide proliferation of interconnected networks.
TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. [407] Commercial Internet service providers (ISPs) emerged in the late 1980s and early 1990s. The ARPANET was decommissioned in 1990. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. [408] The Internet rapidly expanded in Europe and Australia in the mid to late 1980s [409] [410] and to Asia in the late 1980s and early 1990s. [411] The beginning of dedicated transatlantic communication between the NSFNET and networks in Europe was established with a low-speed satellite relay between Princeton University and Stockholm, Sweden in December 1988. [412] Although other network protocols such as UUCP had global reach well before this time, this marked the beginning of the Internet as an intercontinental network.
Public commercial use of the Internet began in mid-1989 with the connection of MCI Mail and Compuserve's email capabilities to the 500,000 users of the Internet. [413] Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that would grow into the commercial Internet we know today. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were possible with satellites. [414] Six months later Tim Berners-Lee would begin writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, [415] the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, [416] and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Since 1995 the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol, or VoIP), two-way interactive video calls, and the World Wide Web [417] with its discussion forums, blogs, social networking, and online shopping sites. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more.
|  | 2005 | 2010 | 2017 | 2023 |
|---|---|---|---|---|
| World population (billions) [419] | 6.5 | 6.9 | 7.4 | 8.0 |
| Worldwide | 16% | 30% | 48% | 67% |
| In developing world | 8% | 21% | 41.3% | 60% |
| In developed world | 51% | 67% | 81% | 93% |
The Internet continues to grow, driven by ever greater amounts of online information and knowledge, commerce, entertainment and social networking. [420] During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. [421] This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. [422] As of 31 March 2011, the estimated total number of Internet users was 2.095 billion (30.2% of world population). [423] It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication, by 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. [424]
The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols ( IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. ICANN coordinates the assignment of unique identifiers for use on the Internet, including domain names, Internet Protocol (IP) addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. [425]
Regional Internet Registries (RIRs) allocate IP addresses within their service regions: AFRINIC for Africa, ARIN for North America and parts of the Caribbean, APNIC for Asia and the Pacific, LACNIC for Latin America and the Caribbean, and RIPE NCC for Europe, the Middle East, and Central Asia.
The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. [426] [427] [428] [429] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". [430] Its members include individuals (anyone may join) as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.
The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture.
Internet service providers establish the worldwide connectivity between individual networks at various levels of scope. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs.

Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET. Both the Internet IP routing structure and the hypertext links of the World Wide Web are examples of scale-free networks. [431]

Computers and routers use routing tables in their operating system to direct IP packets to the next-hop router or destination. Routing tables are maintained by manual configuration or automatically by routing protocols. End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.
An estimated 70 percent of the world's Internet traffic passes through Ashburn, Virginia. [432] [433] [434] [435]
Common methods of Internet access by users include dial-up with a computer modem via telephone circuits, broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology (e.g. 3G, 4G). The Internet may often be accessed from computers in libraries and Internet cafes. Internet access points exist in many public places such as airport halls and coffee shops. Various terms are used, such as public Internet kiosk, public access terminal, and Web payphone. Many hotels also have public terminals that are usually fee-based. These terminals are widely used for purposes such as ticket booking, bank deposits, or online payments. Wi-Fi provides wireless access to the Internet via local computer networks. Hotspots providing such access include Wi-Fi cafes, where users need to bring their own wireless devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based.
Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services covering large city areas are in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. The Internet can then be accessed from places, such as a park bench. [436] Apart from Wi-Fi, there have been experiments with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular phone networks, and fixed wireless services. High-end mobile phones such as smartphones in general come with Internet access through the phone network. Web browsers such as Opera are available on these advanced handsets, which can also run a wide variety of other Internet software. More mobile phones have Internet access than PCs, although this is not as widely used. [437] An Internet access provider and protocol matrix differentiates the methods used to get online.
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the Internet Engineering Task Force (IETF). [438] The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. Resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices (BCP) when implementing Internet technologies.
The Internet standards describe a framework known as the Internet protocol suite. This is a model architecture that divides methods into a layered system of protocols, originally documented in RFC 1122 and RFC 1123. The layers correspond to the environment or scope in which their services operate. At the top is the application layer, space for the application-specific networking methods used in software applications. For example, a web browser program uses the client-server application model and a specific protocol of interaction between servers and clients, while many file-sharing systems use a peer-to-peer paradigm. Below this top layer, the transport layer connects applications on different hosts with a logical channel through the network with appropriate data exchange methods.
Underlying these layers are the networking technologies that interconnect networks at their borders and exchange traffic across them. The Internet layer enables computers to identify and locate each other via Internet Protocol (IP) addresses, and routes their traffic via intermediate (transit) networks. Last, at the bottom of the architecture is the link layer, which provides logical connectivity between hosts on the same network link, such as a local area network (LAN) or a dial-up connection. The model, also known as TCP/IP, is designed to be independent of the underlying hardware used for the physical connections, which the model does not concern itself with in any detail. Other models have been developed, such as the OSI model, that attempt to be comprehensive in every aspect of communications. While many similarities exist between the models, they are not compatible in the details of description or implementation. Yet, TCP/IP protocols are usually included in the discussion of OSI networking.
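The encapsulation idea at the heart of this layered model can be sketched with a toy example: each layer wraps the payload handed down from the layer above with its own header. The header formats below are invented purely for illustration and are not real TCP/IP wire formats.

```python
# Toy sketch of TCP/IP-style encapsulation. Header layouts are
# hypothetical; real TCP and IP headers are binary structures.

def app_layer(message: str) -> bytes:
    # Application layer: an HTTP-like request produced by a program.
    return f"GET /index.html\r\n{message}\r\n".encode()

def transport_layer(payload: bytes, src_port: int, dst_port: int) -> bytes:
    # Transport layer: prepend a (made-up) header carrying port numbers,
    # which identify the communicating applications on each host.
    header = f"TP {src_port}->{dst_port} len={len(payload)}|".encode()
    return header + payload

def internet_layer(segment: bytes, src_ip: str, dst_ip: str) -> bytes:
    # Internet layer: prepend a (made-up) header carrying IP addresses,
    # which identify and locate the hosts themselves.
    header = f"IP {src_ip}->{dst_ip}|".encode()
    return header + segment

packet = internet_layer(
    transport_layer(app_layer("Host: example.org"), 49152, 80),
    "192.0.2.1", "198.51.100.7",
)
print(packet.decode())
```

Peeling the headers off again in reverse order, as a receiving host does, is the mirror image of this wrapping.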
The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems, including IP addresses, for computers on the network. IP enables internetworking and, in essence, establishes the Internet itself. Internet Protocol Version 4 (IPv4) is the initial version used on the first generation of the Internet and is still in dominant use. It was designed to address up to ~4.3 billion (2^32) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, [439] when the global address allocation pool was exhausted. A new protocol version, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 is currently in growing deployment around the world, since Internet address registries (RIRs) began to urge all resource managers to plan rapid adoption and conversion. [440]
IPv6 is not directly interoperable by design with IPv4. In essence, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities must exist for internetworking or nodes must have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol. Network infrastructure, however, has been lagging in this development. Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts, e.g., peering agreements, and by technical specifications or protocols that describe the exchange of data over the network. Indeed, the Internet is defined by its interconnections and routing policies.
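The difference in scale between the two address spaces, and one common representation used by translation facilities, can be checked with Python's standard `ipaddress` module:

```python
import ipaddress

ipv4 = ipaddress.ip_address("192.0.2.1")
ipv6 = ipaddress.ip_address("2001:db8::1")

# IPv4 offers 2**32 addresses; IPv6 offers 2**128.
print(ipaddress.IPv4Network("0.0.0.0/0").num_addresses)       # 4294967296
print(ipaddress.IPv6Network("::/0").num_addresses == 2**128)  # True

# The two versions are distinct and not directly interoperable:
# the module parses them into separate address types.
print(ipv4.version, ipv6.version)  # 4 6

# IPv4-mapped IPv6 addresses are one mechanism for representing
# IPv4 endpoints inside IPv6-aware software.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
print(mapped.ipv4_mapped)  # 192.0.2.1
```

The example addresses come from the documentation-reserved ranges (192.0.2.0/24 and 2001:db8::/32), so they are safe placeholders.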
The Internet carries many network services, most prominently mobile apps such as social media apps, the World Wide Web, electronic mail, multiplayer online games, Internet telephony, and file sharing services.
Many people erroneously use the terms Internet and World Wide Web, or just the Web, interchangeably, but the two terms are not synonymous. The World Wide Web is the primary application program that billions of people use on the Internet, and it has changed their lives immeasurably. [441] [442] However, the Internet provides many other services. The Web is a global set of documents, images and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs). URIs symbolically identify services, servers, and other databases, and the documents and resources that they can provide. Hypertext Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP to allow software systems to communicate in order to share and exchange business logic and data.
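The components of a URI can be taken apart with Python's standard `urllib.parse` module; the example URI below is a hypothetical one built on a documentation-reserved domain:

```python
from urllib.parse import urlparse

# A hypothetical URI: scheme, host, path, query and fragment.
uri = "https://www.example.org/wiki/Internet?action=view#History"
parts = urlparse(uri)

print(parts.scheme)    # https  -> the access protocol (here HTTP over TLS)
print(parts.netloc)    # www.example.org  -> the server being identified
print(parts.path)      # /wiki/Internet   -> the resource on that server
print(parts.query)     # action=view      -> parameters for the resource
print(parts.fragment)  # History          -> a location within the resource
```

This illustrates how a single identifier symbolically names a service, a server, and a specific resource at once.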
World Wide Web browser software, such as Microsoft's Internet Explorer/Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, lets users navigate from one web page to another via hyperlinks embedded in the documents. These documents may also contain any combination of computer data, including graphics, sounds, text, video, multimedia and interactive content that runs while the user is interacting with the page. Client-side software can include animations, games, office applications and scientific demonstrations. Through keyword-driven Internet research using search engines like Yahoo!, Bing and Google, users worldwide have easy, instant access to a vast and diverse amount of online information. Compared to printed media, books, encyclopedias and traditional libraries, the World Wide Web has enabled the decentralization of information on a large scale.
The Web has also enabled individuals and organizations to publish ideas and information to a potentially large audience online at greatly reduced expense and time delay. Publishing a web page, a blog, or building a website involves little initial cost and many cost-free services are available. However, publishing and maintaining large, professional web sites with attractive, diverse and up-to-date information is still a difficult and expensive proposition. Many individuals and some companies and groups use web logs or blogs, which are largely used as easily updatable online diaries. Some commercial organizations encourage staff to communicate advice in their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result.
Advertising on popular web pages can be lucrative, and e-commerce, which is the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. [443]: 19 Many common online advertising practices are controversial and increasingly subject to regulation.
When the Web developed in the 1990s, a typical web page was stored in completed form on a web server, formatted in HTML, complete for transmission to a web browser in response to a request. Over time, the process of creating and serving web pages has become dynamic, creating a flexible design, layout, and content. Websites are often created using content management software with, initially, very little content. Contributors to these systems, who may be paid staff, members of an organization or the public, fill underlying databases with content using editing pages designed for that purpose while casual visitors view and read this content in HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors.
Email is an important communications service available on the Internet. The concept of sending electronic text messages between parties in a way analogous to mailing letters or memos predates the creation of the Internet. Pictures, documents, and other files are sent as email attachments. Emails can be cc-ed to multiple email addresses.
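Constructing such a message, complete with carbon copies and a file attachment, can be sketched with Python's standard `email.message` module. The addresses and the attachment bytes below are placeholders:

```python
from email.message import EmailMessage

# Build a message with a primary recipient, two carbon copies,
# a body, and an attached document (placeholder bytes).
msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.org"
msg["Cc"] = "carol@example.org, dave@example.org"  # cc-ed addresses
msg["Subject"] = "Quarterly report"
msg.set_content("The report is attached.")

# Attach a hypothetical PDF; add_attachment turns the message
# into a multipart MIME document.
msg.add_attachment(b"%PDF-1.4 placeholder bytes",
                   maintype="application", subtype="pdf",
                   filename="report.pdf")

print(msg["Cc"])
print(msg.is_multipart())  # True once an attachment is added
```

Actually sending the message would hand it to an SMTP server (e.g. via `smtplib`), which is omitted here.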
Internet telephony is another common communications service made possible by the creation of the Internet. VoIP stands for Voice-over-Internet Protocol, referring to the protocol that underlies all Internet communication. The idea began in the early 1990s with walkie-talkie-like voice applications for personal computers. In recent years many VoIP systems have become as easy to use and as convenient as a normal telephone. The benefit is that, as the Internet carries the voice traffic, VoIP can be free or cost much less than a traditional telephone call, especially over long distances and especially for those with always-on Internet connections such as cable or ADSL. VoIP is maturing into a competitive alternative to traditional telephone service. Interoperability between different providers has improved and the ability to call or receive a call from a traditional telephone is available. Simple, inexpensive VoIP network adapters are available that eliminate the need for a personal computer.
Voice quality can still vary from call to call, but is often equal to and can even exceed that of traditional calls. Remaining problems for VoIP include emergency telephone number dialing and reliability. Currently, a few VoIP providers provide an emergency service, but it is not universally available. Older traditional phones with no "extra features" may be line-powered only and operate during a power failure; VoIP can never do so without a backup power source for the phone equipment and the Internet access devices. VoIP has also become increasingly popular for gaming applications, as a form of communication between players. Popular VoIP clients for gaming include Ventrilo and Teamspeak. Modern video game consoles also offer VoIP chat features.
File sharing is an example of transferring large amounts of data across the Internet. A computer file can be emailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or File Transfer Protocol (FTP) server for easy download by others. It can be put into a "shared location" or onto a file server for instant use by colleagues. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. In any of these cases, access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed – usually fully encrypted – across the Internet. The origin and authenticity of the file received may be checked by digital signatures or by MD5 or other message digests. These simple features of the Internet, on a worldwide basis, are changing the production, sale, and distribution of anything that can be reduced to a computer file for transmission. This includes all manner of print publications, software products, news, music, film, video, photography, graphics and the other arts. This in turn has caused seismic shifts in each of the existing industries that previously controlled the production and distribution of these products.
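Digest-based integrity checking can be sketched with Python's standard `hashlib` module. SHA-256 is shown alongside the MD5 the text mentions (MD5 is now considered broken for security purposes); the file contents below are a placeholder:

```python
import hashlib

# Placeholder standing in for the bytes of a downloaded file.
data = b"contents of a downloaded file"

# Compute digests of the original data. A publisher would list the
# expected digest alongside the download link.
md5_digest = hashlib.md5(data).hexdigest()
sha_digest = hashlib.sha256(data).hexdigest()

def verify(received: bytes, expected_sha256: str) -> bool:
    # Recompute the digest of what actually arrived and compare it
    # to the published value.
    return hashlib.sha256(received).hexdigest() == expected_sha256

print(verify(data, sha_digest))         # True: intact transfer
print(verify(data + b"x", sha_digest))  # False: any change alters the digest
```

A matching digest shows the file was not corrupted or altered in transit, provided the expected digest itself was obtained over a trustworthy channel; proving who published the file additionally requires a digital signature.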
Streaming media is the real-time delivery of digital media for the immediate consumption or enjoyment by end users. Many radio and television broadcasters provide Internet feeds of their live audio and video productions. They may also allow time-shift viewing or listening such as Preview, Classic Clips and Listen Again features. These providers have been joined by a range of pure Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected device, such as a computer or something more specific, can be used to access on-line media in much the same way as was previously possible only with a television or radio receiver. The range of available types of content is much wider, from specialized technical webcasts to on-demand popular multimedia services. Podcasting is a variation on this theme, where – usually audio – material is downloaded and played back on a computer or shifted to a portable media player to be listened to on the move. These techniques using simple equipment allow anybody, with little censorship or licensing control, to broadcast audio-visual material worldwide.
Digital media streaming increases the demand for network bandwidth. For example, standard image quality needs 1 Mbit/s link speed for SD 480p, HD 720p quality requires 2.5 Mbit/s, and the top-of-the-line HDX quality needs 4.5 Mbit/s for 1080p. [444]
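The bitrates quoted above translate into data volumes as follows; this is a rough sketch that ignores protocol overhead and uses decimal gigabytes:

```python
# Convert a sustained streaming bitrate into data volume per hour.
def gigabytes_per_hour(mbit_per_s: float) -> float:
    bits = mbit_per_s * 1_000_000 * 3600  # bits streamed in one hour
    return bits / 8 / 1_000_000_000       # bits -> bytes -> decimal GB

for label, rate in [("SD 480p", 1.0), ("HD 720p", 2.5), ("HDX 1080p", 4.5)]:
    print(f"{label}: {gigabytes_per_hour(rate):.3f} GB/hour")
# SD 480p: 0.450 GB/hour
# HD 720p: 1.125 GB/hour
# HDX 1080p: 2.025 GB/hour
```

An hour of 1080p at 4.5 Mbit/s thus consumes roughly two gigabytes, which is why streaming dominates aggregate bandwidth demand.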
Webcams are a low-cost extension of this phenomenon. While some webcams can give full-frame-rate video, the picture either is usually small or updates slowly. Internet users can watch animals around an African waterhole, ships in the Panama Canal, traffic at a local roundabout or monitor their own premises, live and in real time. Video chat rooms and video conferencing are also popular with many uses being found for personal webcams, with and without two-way sound. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with a vast number of users. It uses a flash-based web player to stream and show video files. Registered users may upload an unlimited amount of video and build their own personal profile. YouTube claims that its users watch hundreds of millions, and upload hundreds of thousands of videos daily. Currently, YouTube also uses an HTML5 player. [445]
The Internet has enabled new forms of social interaction, activities, and social associations. This phenomenon has given rise to the scholarly study of the sociology of the Internet.
Internet usage has seen tremendous growth. From 2000 to 2009, the number of Internet users globally rose from 394 million to 1.858 billion. [450] By 2010, 22 percent of the world's population had access to computers, with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. [451] In 2014 the world's Internet users surpassed 3 billion, or 43.6 percent of the world population, but two-thirds of those users came from the richest countries, with 78.0 percent of Europe's population using the Internet, followed by 57.4 percent of the Americas. [452]
The prevalent language for communication on the Internet has been English. This may be a result of the origin of the Internet, as well as the language's role as a lingua franca. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet.
After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). [448] By region, 42% of the world's Internet users are based in Asia, 24% in Europe, 14% in North America, 10% in Latin America and the Caribbean taken together, 6% in Africa, 3% in the Middle East and 1% in Australia/Oceania. [453] The Internet's technologies have developed enough in recent years, especially in the use of Unicode, that good facilities are available for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain.
In an American study in 2005, the percentage of men using the Internet was very slightly ahead of the percentage of women, although this difference reversed in those under 30. Men logged on more often, spent more time online, and were more likely to be broadband users, whereas women tended to make more use of opportunities to communicate (such as email). Men were more likely to use the Internet to pay bills, participate in auctions, and for recreation such as downloading music and videos. Men and women were equally likely to use the Internet for shopping and banking. [454] More recent studies indicate that in 2008, women significantly outnumbered men on most social networking sites, such as Facebook and Myspace, although the ratios varied with age. [455] In addition, women watched more streaming content, whereas men downloaded more. [456] In terms of blogs, men were more likely to blog in the first place; among those who blog, men were more likely to have a professional blog, whereas women were more likely to have a personal blog. [457]
Forecasts predict that 44% of the world's population will be users of the Internet by 2020. [458] Splitting by country, in 2012 Iceland, Norway, Sweden, the Netherlands, and Denmark had the highest Internet penetration by the number of users, with 93% or more of the population with access. [459]
Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") [460] refers to those actively involved in improving online communities, the Internet in general, or surrounding political affairs and rights such as free speech; [461] [462] Internaut refers to operators or technically highly capable users of the Internet; [463] [464] and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. [465]
The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly. Within the limitations imposed by small screens and other limited facilities of such pocket-sized devices, the services of the Internet, including email and the web, may be available. Service providers may restrict the services offered and mobile data charges may be significantly higher than other access methods.
Educational material at all levels from pre-school to post-doctoral is available from websites. Examples range from CBeebies, through school and high-school revision guides and virtual universities, to access to top-end scholarly literature through the likes of Google Scholar. For distance education, help with homework and other assignments, self-guided learning, whiling away spare time, or just looking up more detail on an interesting fact, it has never been easier for people to access educational information at any level from anywhere. The Internet in general and the World Wide Web in particular are important enablers of both formal and informal education. Further, the Internet allows universities, in particular, researchers from the social and behavioral sciences, to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. [466]
The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software. Not only can a group cheaply communicate and share ideas but the wide reach of the Internet allows such groups more easily to form. An example of this is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice). Internet chat, whether using an IRC chat room, an instant messaging system, or a social networking website, allows colleagues to stay in touch in a very convenient way while working at their computers during the day. Messages can be exchanged even more quickly and conveniently than via email. These systems may allow files to be exchanged, drawings and images to be shared, or voice and video contact between team members.
Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work. Business and project teams can share calendars as well as documents and other information. Such collaboration occurs in a wide variety of areas including scientific research, software development, conference planning, political activism and creative writing. Social and political collaboration is also becoming more widespread as both Internet access and computer literacy spread.
The Internet allows computer users to remotely access other computers and information stores easily from any access point. Access may be with computer security, i.e. authentication and encryption technologies, depending on the requirements. This is encouraging new ways of working from home, collaboration and information sharing in many industries. An accountant sitting at home can audit the books of a company based in another country, on a server situated in a third country that is remotely maintained by IT specialists in a fourth. These accounts could have been created by home-working bookkeepers, in other remote locations, based on information emailed to them from offices all over the world. Some of these things were possible before the widespread use of the Internet, but the cost of private leased lines would have made many of them infeasible in practice. An office worker away from their desk, perhaps on the other side of the world on a business trip or a holiday, can access their emails, access their data using cloud computing, or open a remote desktop session into their office PC using a secure virtual private network (VPN) connection on the Internet. This can give the worker complete access to all of their normal files and data, including email and other applications, while away from the office. It has been referred to among system administrators as the Virtual Private Nightmare, [467] because it extends the secure perimeter of a corporate network into remote locations and its employees' homes.
Many people use the World Wide Web to access news, weather and sports reports, to plan and book vacations and to pursue their personal interests. People use chat, messaging and email to make and stay in touch with friends worldwide, sometimes in the same way as some previously had pen pals. Social networking websites such as Facebook, Twitter, and Myspace have created new ways to socialize and interact. Users of these sites are able to add a wide variety of information to pages, to pursue common interests, and to connect with others. It is also possible to find existing acquaintances, to allow communication among existing groups of people. Sites like LinkedIn foster commercial and business connections. YouTube and Flickr specialize in users' videos and photographs. While social networking sites were initially for individuals only, today they are widely used by businesses and other organizations to promote their brands, to market to their customers and to encourage posts to "go viral". "Black hat" social media techniques are also employed by some organizations, such as spam accounts and astroturfing.
A risk for both individuals and organizations writing posts (especially public posts) on social networking websites, is that especially foolish or controversial posts occasionally lead to an unexpected and possibly large-scale backlash on social media from other Internet users. This is also a risk in relation to controversial offline behavior, if it is widely made known. The nature of this backlash can range widely from counter-arguments and public mockery, through insults and hate speech, to, in extreme cases, rape and death threats. The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment in response to posts they have made on social media, and Twitter in particular has been criticised in the past for not doing enough to aid victims of online abuse. [468]
For organizations, such a backlash can cause overall brand damage, especially if reported by the media. However, this is not always the case, as any brand damage in the eyes of people with an opposing opinion to that presented by the organization could sometimes be outweighed by strengthening the brand in the eyes of others. Furthermore, if an organization or individual gives in to demands that others perceive as wrong-headed, that can then provoke a counter-backlash.
Some websites, such as Reddit, have rules forbidding the posting of personal information of individuals (also known as doxxing), due to concerns about such postings leading to mobs of large numbers of Internet users directing harassment at the specific individuals thereby identified. In particular, the Reddit rule forbidding the posting of personal information is widely understood to imply that all identifying photos and names must be censored in Facebook screenshots posted to Reddit. However, the interpretation of this rule in relation to public Twitter posts is less clear, and in any case, like-minded people online have many other ways they can use to direct each other's attention to public social media posts they disagree with.
Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Children may also encounter material which they may find upsetting, or material which their parents consider to be not age-appropriate. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering, and/or supervise their children's online activities, in an attempt to protect their children from inappropriate material on the Internet. The most popular social networking websites, such as Facebook and Twitter, commonly forbid users under the age of 13. However, these policies are typically trivial to circumvent by registering an account with a false birth date, and a significant number of children aged under 13 join such sites anyway. Social networking sites for younger children, which claim to provide better levels of protection for children, also exist. [469]
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. The Internet pornography and online gambling industries have taken advantage of the World Wide Web, and often provide a significant source of advertising revenue for other websites. [470] Although many governments have attempted to restrict both industries' use of the Internet, in general, this has failed to stop their widespread popularity. [471]
Another area of leisure activity on the Internet is multiplayer gaming. [472] This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. [473] Non-subscribers were limited to certain types of game play or certain games. Many people use the Internet to access and download music, movies and other works for their enjoyment and relaxation. Free and fee-based services exist for all of these activities, using centralized servers and distributed peer-to-peer technologies. Some of these sources exercise more care with respect to the original artists' copyrights than others.
Internet usage has been correlated to users' loneliness. [474] Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "i am lonely will anyone speak to me" thread.
Cybersectarianism is a new organizational form which involves: "highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in the collective study via email, on-line chat rooms, and web-based message boards." [475] In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called " Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.
Cyberslacking can become a drain on corporate resources; the average UK employee spent 57 minutes a day surfing the Web while at work, according to a 2003 study by Peninsula Business Services. [476] Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. [477]
Electronic business (e-business) encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, worldwide e-commerce, combining global business-to-business and business-to-consumer transactions, equated to $16 trillion in 2013. A report by Oxford Economics adds those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. [478]
While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide. [479] Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick and mortar businesses resulting in increases in income inequality. [480] [481] [482]
Author Andrew Keen, a long-time critic of the social transformations caused by the Internet, has recently focused on the economic effects of consolidation from Internet businesses. Keen cites a 2013 Institute for Local Self-Reliance report saying brick-and-mortar retailers employ 47 people for every $10 million in sales while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Hotels, which employs 152,000 people. And car-sharing Internet startup Uber employs 1,000 full-time employees and is valued at $18.2 billion, about the same valuation as Avis and Hertz combined, which together employ almost 60,000 people. [483]
Telecommuting is the performance of work within a traditional worker-employer relationship, facilitated by tools such as groupware, virtual private networks, conference calling, videoconferencing, and voice over IP (VoIP), so that work may be performed from any location, most conveniently the worker's home. It can be efficient and useful for companies as it allows workers to communicate over long distances, saving significant amounts of travel time and cost. As broadband Internet connections become commonplace, more workers have adequate bandwidth at home to use these tools to link their home to their corporate intranet and internal communication networks.
Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. [484] In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. [485] The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. [486] The English Wikipedia has the largest user base among wikis on the World Wide Web [487] and ranks in the top 10 among all Web sites in terms of traffic. [488]
The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet as a new method of organizing to carry out their mission, giving rise to Internet activism, most notably practiced by rebels in the Arab Spring. [489] [490] The New York Times suggested that social media websites, such as Facebook and Twitter, helped people organize the political revolutions in Egypt by helping activists organize protests, communicate grievances, and disseminate information. [491]
Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies.
The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. Kiva raises funds for local intermediary microfinance organizations which post stories and updates on behalf of the borrowers. Lenders can contribute as little as $25 to loans of their choice, and receive their money back as borrowers repay. Kiva falls short of being a pure peer-to-peer charity, in that loans are disbursed before being funded by lenders and borrowers do not communicate with lenders themselves. [492] [493]
However, the recent spread of low-cost Internet access in developing countries has made genuine international person-to-person philanthropy increasingly feasible. In 2009, the US-based nonprofit Zidisha tapped into this trend to offer the first person-to-person microfinance platform to link lenders and borrowers across international borders without intermediaries. Members can fund loans for as little as a dollar, which the borrowers then use to develop business activities that improve their families' incomes while repaying loans to the members with interest. Borrowers access the Internet via public cybercafes, donated laptops in village schools, and even smart phones, then create their own profile pages through which they share photos and information about themselves and their businesses. As they repay their loans, borrowers continue to share updates and dialogue with lenders via their profile pages. This direct web-based connection allows members themselves to take on many of the communication and recording tasks traditionally performed by local organizations, bypassing geographic barriers and dramatically reducing the cost of microfinance services to the entrepreneurs. [494]
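The mechanics of such platforms can be illustrated with a small sketch. The function below is hypothetical (not Kiva's or Zidisha's actual code) and shows one plausible way a platform might split a borrower's repayment pro rata among the lenders who funded a loan, working in integer cents so the amounts always balance:

```python
# Hypothetical sketch of how a peer-to-peer lending platform might split a
# borrower's repayment pro rata among the lenders who funded the loan.
def split_repayment(contributions: dict, repayment_cents: int) -> dict:
    """Allocate a repayment (in cents) proportionally to each lender's share."""
    total = sum(contributions.values())
    shares = {name: repayment_cents * amount // total
              for name, amount in contributions.items()}
    # Give any integer-rounding remainder to the largest lender so cents balance.
    remainder = repayment_cents - sum(shares.values())
    largest = max(contributions, key=contributions.get)
    shares[largest] += remainder
    return shares

lenders = {"alice": 2500, "bob": 2500, "carol": 5000}   # cents contributed
payout = split_repayment(lenders, 1000)                  # a $10.00 repayment
```

Here carol funded half the loan, so she receives half of each repayment; in an interest-bearing model such as Zidisha's, the repayment amount would simply exceed the principal.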
Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information.
Malicious software used and spread on the Internet includes computer viruses, which replicate with the help of humans; computer worms, which replicate themselves automatically; software for denial of service attacks; ransomware; botnets; and spyware that reports on the activity and typing of users. Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibilities of cyber warfare using similar methods on a large scale. [citation needed]
The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. [495] In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. [496] [497] [498] Packet capture is the monitoring of data traffic on a computer network. Computers communicate over the Internet by breaking up messages (emails, images, videos, web pages, files, etc.) into small chunks called "packets", which are routed through a network of computers until they reach their destination, where they are assembled back into a complete "message" again. A packet capture appliance intercepts these packets as they travel through the network, so that their contents can be examined using other programs. A packet capture is an information-gathering tool, not an analysis tool: it gathers "messages" but does not analyze them or figure out what they mean. Other programs are needed to perform traffic analysis and sift through intercepted data looking for important or useful information. Under the Communications Assistance For Law Enforcement Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and voice over Internet protocol (VoIP) traffic. [499]
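To make the "packet" idea concrete, the sketch below uses Python's standard struct module to decode the fixed 20-byte IPv4 header that every such packet carries. This is the kind of decoding an analysis program performs on bytes a capture appliance has recorded; the sample packet here is hand-built for illustration, not real captured traffic:

```python
import struct

def parse_ipv4_header(packet: bytes) -> dict:
    """Decode the fixed 20-byte IPv4 header at the start of a captured packet.

    A capture appliance only records raw bytes like these; separate tools
    must decode them before any traffic analysis can happen.
    """
    (version_ihl, tos, total_length, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": version_ihl >> 4,
        "header_len": (version_ihl & 0x0F) * 4,   # header length in bytes
        "total_length": total_length,
        "ttl": ttl,
        "protocol": proto,                         # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-built example header: IPv4, TTL 64, TCP, 10.0.0.1 -> 192.168.1.5
sample = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 40, 0, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([192, 168, 1, 5]))
info = parse_ipv4_header(sample)
```

The source and destination addresses recovered this way are exactly the metadata that traffic-analysis tools aggregate across millions of packets.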
The large amount of data gathered from packet capturing requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access of certain types of web sites, or communicating via email or chat with certain parties. [500] Agencies such as the Information Awareness Office, NSA, GCHQ and the FBI spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. [501] Similar systems are operated by the Iranian secret police to identify and suppress dissidents. The required hardware and software was allegedly installed by the German company Siemens AG and the Finnish company Nokia. [502]
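A minimal sketch of this kind of first-pass keyword filtering, with placeholder watch phrases standing in for whatever a real system is configured to match, might look like:

```python
# Hypothetical sketch: flag intercepted messages that contain watched phrases,
# the first-pass filtering step that surveillance software performs before
# a human or a heavier analysis tool looks at the matches.
WATCHED = {"wire transfer", "encryption key"}   # placeholder watch phrases

def flag_messages(messages):
    """Return (index, matched phrases) for each message mentioning a watched phrase."""
    hits = []
    for i, text in enumerate(messages):
        lowered = text.lower()                   # case-insensitive matching
        matched = {p for p in WATCHED if p in lowered}
        if matched:
            hits.append((i, matched))
    return hits

intercepted = [
    "Lunch at noon?",
    "Send the Wire Transfer details tonight.",
    "The encryption key rotates weekly.",
]
flagged = flag_messages(intercepted)   # the second and third messages match
```

Real systems add phrase variants, metadata filters (sender, destination, site category), and scoring, but the reduction step is the same: discard the bulk of traffic and surface only matches.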
[Map legend: Internet censorship by country – pervasive; substantial; selective; little or none; not classified or no data]
Some governments, such as those of Burma, Iran, North Korea, mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters. [508]
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. [509] Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet, but do not mandate filter software. Many free or commercially available software programs, called content-control software are available to users to block offensive websites on individual computers or networks, in order to limit access by children to pornographic material or depiction of violence.
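Domain-based filtering of the kind described above commonly works by matching a requested URL's host against a blocklist. A minimal sketch, with placeholder blocklist entries rather than any real list:

```python
from urllib.parse import urlparse

# Illustrative sketch of the domain filtering that content-control software
# performs; the blocklist entries here are placeholders, not a real list.
BLOCKLIST = {"example-blocked.test", "bad.example"}

def is_blocked(url: str) -> bool:
    """Block a URL if its host is a listed domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

is_blocked("https://www.bad.example/page")   # blocked: subdomain of a listed domain
is_blocked("https://en.wikipedia.org/")      # allowed: host is not listed
```

Matching on the host rather than the full URL is what makes such filters coarse: blocking a domain blocks every page it serves, which is one reason content-control software is often criticized for over-blocking.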
As the Internet is a heterogeneous network, its physical characteristics, including for example the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [citation needed]
An Internet blackout or outage can be caused by local signalling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to a small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. [510] Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% [511] of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. [512]
In 2011, researchers estimated the energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. [513] [514]
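The "less than two percent" figure can be sanity-checked with back-of-the-envelope arithmetic. The world-total power figure below (roughly 17 TW) is an assumed approximation for illustration, not a number taken from the cited study:

```python
# Back-of-the-envelope check of the "less than two percent" claim.
# Assumption: total human power consumption of about 17 TW (approximate).
internet_low_gw, internet_high_gw = 170, 307   # estimated Internet power draw, GW
world_tw = 17                                  # assumed world total, TW

share_low = internet_low_gw / (world_tw * 1000)    # convert TW to GW, take ratio
share_high = internet_high_gw / (world_tw * 1000)
# share_low is 1.0% and share_high about 1.8% - both under two percent
```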