
What Does Defacement Mean?

Defacement is a form of vandalism in which a website is visibly altered by hackers or crackers trying to make their mark. Often, website defacement is used to mask a bigger crime being committed behind the scenes.

Techopedia Explains Defacement

Website defacement is usually done using SQL injection to log into the administrator’s account. The usual targets for defacement are government organizations and religious websites. These acts are usually perpetrated by activists (or hacktivists) working against the principles and ideals of the sponsoring organization.
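
The SQL injection route mentioned above can be sketched in miniature. The following hypothetical Python example uses an in-memory SQLite database as a stand-in for a site's user store; all table names and credentials are invented for illustration. It shows how a string-built query allows a login bypass, and how a parameterized query neutralizes the same input.

```python
import sqlite3

# Hypothetical user store standing in for a site's backend database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

def login_vulnerable(name, password):
    # String concatenation lets attacker input rewrite the SQL itself.
    query = ("SELECT * FROM users WHERE name = '%s' AND password = '%s'"
             % (name, password))
    return conn.execute(query).fetchone() is not None

def login_safe(name, password):
    # Parameterized query: input is bound as data, never parsed as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

bypass = "' OR '1'='1"
print(login_vulnerable("admin", bypass))  # True: authentication bypassed
print(login_safe("admin", bypass))        # False: injection neutralized
```

With the classic `' OR '1'='1` input, the concatenated query's WHERE clause becomes always-true, so the attacker is "logged in" without knowing the password.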

Defacement usually occurs on a popular website with many viewers. The vandalism usually contains images of the victim, which are often photo-edited as a joke or to express hatred. This may be done by adding a beard or horns and captions directed against the person or organization. The hacker then displays his or her pseudonym for publicity.

There are even online contests in the hacking community to determine who can deface the most websites in a certain amount of time. The websites that have been defaced are forced to go offline to undergo maintenance, causing a loss to the organization in the form of wasted time and effort.

The defacement of a website also turns off the site's visitors and gives the impression that the defaced website may not be secure and is incapable of protecting its own property.

Non-State Actors in Computer Network Operations

Jason Andress, Steve Winterfeld, in Cyber Warfare (Second Edition), 2014

Patriotic Hackers

Patriotic hackers may reasonably be argued to be a subset of hacktivists, but they are generally tied to national conflicts and can even join cyber wars as independent players. They use many of the same tools and methods: Web site defacement, DDoS attacks, and so on, but generally act in support of a particular country, or an effort on the part of a country, although not in any officially sponsored sense.

There have also been occasions where such patriot hackers have been rumored to be in the employ of a state, paid to carry out their activities. One such occasion in December of 2009 involved the theft and public posting of thousands of emails from the University of East Anglia's Climatic Research Unit. It is believed that the patriot hackers involved in the incident were acting on behalf of Russia in order to discredit the need to reduce carbon emissions to help fight global warming [3].

Patriot hackers will likely have many of the same motivations as hacktivists, although with a much more nationalistic focus. The activities of patriot hackers may additionally be somewhat sharper and more directed than those of a hacktivist.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B978012416672100012X

Political Cyber Attack Comes of Age in 2007

Paulo Shakarian, ... Andrew Ruef, in Introduction to Cyber-Warfare, 2013

Leaving Unwanted Messages: Web Site Defacement

Defacement shares many characteristics with denial of service. Defacement usually applies to Web sites that people view in their browsers, and defacements are usually carried out against Web servers owned by the organization with which the attackers have a grievance.

Web site defacement usually results from a Web server with an exploitable vulnerability. The attacker uses this vulnerability to compromise the Web server and modify its content (i.e., Web pages). As long as the exploitable vulnerability remains present, the attacker can enter at will and change the contents of the Web page to any message they choose.

Web site defacement has the same overall effect as a denial of service in that the intended users of a service cannot use it. It has an additional effect in that it communicates a message of the attackers’ choice to all of the intended users as long as the defaced message remains online. Usually, defacements are relatively easy to remove, but if the original vulnerability is not also mitigated, the attacker can continue to alter the content of the Web server. This can be an even more potent message to the users of a Web site.
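
One common defensive response to the persistence problem described above is integrity monitoring: keep a fingerprint of the known-good page and alert whenever the served content diverges from it. A minimal sketch in Python, using a hypothetical baseline page (a real deployment would fetch the live page on a schedule and handle legitimate content updates):

```python
import hashlib

def page_fingerprint(content):
    """SHA-256 digest of the page as served."""
    return hashlib.sha256(content).hexdigest()

# Baseline taken from a known-good copy of the (hypothetical) home page.
GOOD_PAGE = b"<html><body>Welcome to Example Corp</body></html>"
baseline = page_fingerprint(GOOD_PAGE)

def check_for_defacement(current):
    """True if the served page no longer matches the known-good baseline."""
    return page_fingerprint(current) != baseline

print(check_for_defacement(GOOD_PAGE))                          # False: unchanged
print(check_for_defacement(b"<html><body>0wned</body></html>")) # True: altered
```

This catches the recurring re-defacements the text describes, but it does not close the underlying vulnerability; it only shortens the time a defaced message stays online.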

Web site defacements used to be quite popular and they have been archived on the Internet.9 However, if attackers can compromise a Web server, they can also access any information that server holds, which could include valuable data such as usernames and passwords. Where previously sites would be exploited and defaced, recently this seems to have been eclipsed by silent compromises of Web sites followed by a gloating disclosure weeks after the fact, including lists of compromised usernames and e-mail addresses. Typically these disclosures are made on third-party Web sites such as pastebin.com or e-mail lists (see Chapter 6 for more details on how hacking groups such as Anonymous use sites such as pastebin.com to publicize stolen information). Another reason for the stealthy compromise of information is to gather intelligence. We discuss this in more detail in Part 2 of this book.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780124078147000026

How are Organizations Being Compromised?

Eric Cole, in Advanced Persistent Threat, 2013

What are Attackers After?

In order to understand how an organization is compromised, it is important to understand what the attackers are after and trying to compromise. The traditional threat was mainly about bragging rights. In the late 1990s and early 2000s, many attackers focused on showing that they could break into an organization without doing deliberate damage to it. During this time period, Web site defacements were very popular: they were an easy and visible way to show that an organization had been compromised. Other than embarrassment, there was no deliberate harm to the organization. Today the APT is mainly focused on the disclosure and extraction of critical information or intellectual property. While the goal of the APT is to maintain long-term access to a site, the main reason for this is the ability to extract information that can be used to the advantage of an adversary.

What has muddied the waters today is an increase in hacktivism, where hacking groups target an organization or country to try to make a point or to stop it from doing something. In these cases, public embarrassment and reputation damage are the goals, which means there has to be a public or visible component to the attack. These attacks are not typically classified as APT because they often involve standard, customary ways of breaking into an organization. Since one of the goals of the APT is not to get caught, being stealthy is the name of the game; the attackers do not want an organization to find out or know what they are doing. Therefore the APT normally does not have an obvious, visible component. The goal of the APT is to blend in and look like normal traffic.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781597499491000036

When Who Tells the Best Story Wins

Paulo Shakarian, ... Andrew Ruef, in Introduction to Cyber-Warfare, 2013

IO and Cyber Warfare in the 2008 Israel-Hamas War

In late December 2008, Israel commenced a new operation—“Cast Lead”—with the goal of stopping missile strikes in southern Israel that originated from Gaza. The attack commenced with an air assault that took out 50 Hamas targets on the first day. Israel emplaced a carefully constructed information campaign that actually began simultaneously with the physical conflict. Two days after the initial airstrike, the IDF launched the YouTube channel called the “IDF Spokesperson's Unit.” This channel, the brainchild of some IDF soldiers, included a variety of footage of the IDF—everything from video logs (“vlogs”) of IDF personnel to gun video of precision strikes and the footage of humanitarian assistance missions.27 Additionally, the “Jewish Internet Defense Force” played a key role in encouraging the Jewish Diaspora to become active in the “new media” of the Internet. For instance, their Web site included instructions for using various types of social media—including Facebook, YouTube, Wikipedia, and various blogging services. Further, they also directed efforts against the “new media” of the opposing force as they also claimed to be responsible for shutting down several pro-Hamas YouTube channels.

Hamas and the inhabitants of Gaza responded to Israel's IO campaign with their own content documenting the devastation of the Israeli attack. Leveraging mobile phones, Twitter, digital images, and blogs, the Gazans were able to tell their story to the world.28 They responded to the attempts to shut down their YouTube channels by creating paltube.com—a site dedicated to Hamas videos.

Hamas and its supporters not only fought Israeli IO with their own information campaign but also conducted hundreds of defacements of Israeli Web sites. Though some Web site defacements were high profile enough to gain the attention of mass media,29 the actual damage (likely economic) is presumed to have resulted from the sheer number of these actions carried out by pro-Hamas hackers. Typically, the pro-Hamas groups conducted some rudimentary vulnerability scanning of targeted Israeli Web sites, often probing the Web server software. Upon obtaining access to parts of the server, the pro-Hamas hackers would deface the Web sites with anti-Israeli graffiti.30

Perhaps the most notable hacking group for these Web site defacements was known as “Team Hell.” One member, known as “Cold Zero,” was responsible for over 2000 defacements of Israeli Web sites, nearly 800 of which were carried out during the 2008 war. He allegedly conducted defacements of high-profile sites such as Israel's Likud Party and the Tel Aviv Maccabis basketball team.31 Upon his arrest in early January 2009, “Cold Zero” was found to be a 17-year-old Palestinian male Israeli-Arab who worked with accomplices in other Islamic countries.

In addition to Web site defacements, Hamas supporters also leveraged DDoS attacks on a small to medium scale. Pro-Hamas hacker Nimu al-Iraq, who is thought to be a 22-year-old Iraqi, Mohammed Sattar al-Shamari, modified the DDoS hacking tool known as al-Durrah for use in the 2008 Gaza war. This software is similar to the DDoS software used by the Russian hacktivists in the Georgian conflict (as described in Chapter 3): both allowed novice users to easily participate in DDoS attacks during the conflict without giving up control of their own computer. An al-Durrah user would enter the addresses of targeted Israeli servers, obtained from a pro-Hamas hacker forum, into al-Durrah's interface, and the software would proceed to flood the targeted server with requests, eventually taking it offline.32

Israeli hacktivists also had DDoS tools of their own. A pro-Israeli group known as “Help Israel Win” created a tool called “Patriot” which was designed to attack pro-Hamas Web sites during the conflict. This software has been referred to as a “voluntary botnet”: users of the software were connected to a command-and-control server at the URL “defenderhosting.com,” which would then direct the Patriot user's computer in attacks. Unlike al-Durrah, the tools used by the Russian hacktivists (see Chapter 3), or the low-orbit ion cannon (LOIC) of Anonymous (Chapter 6), Patriot is not configurable by the user—allowing defenderhosting.com to completely control the cyber attack actions of its volunteered host.33

As the 24-day conflict passed its initial days, the tide of the IO war shifted from Israel, who initially was telling the more dominant story, to Hamas. The pictures of devastation in Gaza spread through the news media like a virus. What led to this shift? The likely explanation is the fact that several months prior to the outbreak of the conflict Israel started limiting media access to Gaza. In doing so, they hoped to limit the images of collateral damage to infrastructure and civilian casualties that would undoubtedly be reported by Hamas and the Gazans. By limiting the output of such reports, the international community would be slower to call for a resolution to stop the hostilities—thereby giving Israel more time to accomplish its tactical objectives. In this regard, their plan worked—the IDF was generally successful in achieving its tactical goals (as opposed to the 2006 conflict with Hezbollah). However, the side effect was that all the reporting from within Gaza came from Hamas and the Gazans. As a result, the story told from within Gaza was one-sided. By not letting independent media into the area, the Israelis effectively denied the opportunity for a disinterested party to refute the claims of the Gazans.34 Though there were some successful Israeli hacking operations, such as the IDF's hack of the Hamas television station and attempts by Israeli supporters to hack pro-Palestinian Facebook accounts,35 the Israeli efforts in cyberspace were insufficient to stop Hamas from delivering an effective message to the world. Further, the presence of Arab news media reporters from Al Jazeera, who stayed in Gaza since before the IDF started to curb media access, ensured that the Gazans’ story was told to the entire (Arab) world.36

The Israel-Hamas war of 2008 illustrates the importance of social media in modern information operations during conflict and both sides’ attempts to integrate cyber operations to support them. However, unlike Hezbollah's use of IP address hijacking, which directly contributed to the success of their IO in 2006, neither Israel nor Hamas was able to make highly effective use of cyber tactics to support their respective public relations in 2008. The Israelis, despite DDoS attacks against a pro-Hamas Web site and the shutting down of pro-Hamas YouTube channels, were ultimately unsuccessful in stopping the Gazans’ story from reaching the world. While the Hamas supporters may have successfully leveraged some IT knowledge, as in the case of setting up paltube.com, they did not seem to conduct successful, sophisticated cyber operations—as their cyber attacks appeared to be limited to Web site defacements and small-/medium-scale DDoS. Likely, this is due to a lack of technical expertise in their organization—something Hezbollah clearly had in 2006. This could potentially reflect a lack of prioritization on cyber within Hamas in 2008.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B978012407814700004X

Risk Evaluation and Mitigation Strategies

Evan Wheeler, in Security Risk Management, 2011

Security's Role in Decision Making

Traditionally, security practitioners have used Fear, Uncertainty, and Doubt (FUD) or the big-hammer approach to compliance to force through their own initiatives and basically strong-arm the business into implementing certain controls or changing an undesirable practice. Unfortunately, this approach has usually been pushed through without really analyzing the potential impact to the organization or the probability of occurrence in its environment. The latest threats making headlines may have no relevance to your business, or may be far less severe than another weakness that is going unaddressed.

Back in Chapter 3, the risk evaluation stage of the risk management lifecycle was defined as the function of determining the proper steps to manage risk, whether it be to accept, mitigate, transfer, or avoid the risk. During this stage of the lifecycle, newly identified risks need to be compared with the existing list of risks for the organization and priorities determined based on an enterprise view of risks across the organization. This process needs to account for risks to information security, as well as other risk domains, such as financial liquidity, or brand and reputation. Demonstrating the understanding that resources are often pulled away from these other risk areas to address information security risks will add credibility to your program. On its own, an information security risk may seem critical and a no-brainer to throw resources at immediately, but taken into context with other enterprise risks that may threaten the stability or viability of the business, it might need to be de-prioritized. Remember that risk management is not about a checklist, and it may not be appropriate to mitigate every risk. As a risk manager, your responsibility is to help management make well-informed risk decisions in the best interest of the organization.

For some reason, there is a perception in some circles that an exception or mitigation plan is a failure. In reality, just the opposite is true. Well-documented, justified, and tracked mitigation plans and exceptions are the signs of a mature and functioning risk management program. That is not to say that an exception on its own has any intrinsic value. Exceptions need to include sound business justification, be reviewed and approved by an appropriate level of management, and include a plan to address the risk. It is the risk manager's job to filter out the risks that have no chance of occurring, will have a negligible impact on the organization, or are already well mitigated, and to help senior management focus on the actionable and imminent threats to the success of the business. In the end, you are making recommendations about how to best manage an acceptable level of risk; you then need to let the other leaders of the organization make the hard decisions about how to balance available resources.

As part of your obligation to escalate the most critical risks to senior management, it is the information security function's responsibility to educate the organization about the most likely and severe risks, without being perceived as an alarmist. As security professionals and risk managers, it is your responsibility to present the results of risk assessments with enough detail so that senior management can make an educated decision about how to manage the exposure. It is very important that you don't take this personally. Often, management will decide not to address a risk that you consider to be critical or just plain embarrassing for the organization. First, consider that there may be other priorities in the organization that present an even bigger risk to their bottom line; also consider the possibility that you need to find a different strategy for presenting the risk findings. For example, mapping a risk exposure to your organization's annual business objectives will immediately get more attention than referencing ethereal security implications or the dreaded “best practices” justification.

Risk Deep Dive

Weighing the Soft Costs

Think about your organization's public-facing Web site and how your senior management would react if someone was able to exploit a vulnerability that allowed them to deface some of the site for everyone to see. Continuing with this example, let's now weigh out the costs and benefits of implementing controls to prevent this Web site defacement. Ultimately, Web site defacement may not have direct consequences for the organization that are as costly as, for example, credit card data being stolen. However, even if there is no requirement to report the breach, no regulatory fines, and no payments to clients or contract violations, the reputational damage can still be devastating for the business. For example, one major financial services institution invested heavily in a Web-Application Firewall (WAF) for their public Web site that for the most part only had static pages, like directions to their office and general information about the company. If you know anything about Web application vulnerabilities, you know that the richest vulnerabilities exist in interactive Web sites, whereas static Web pages present far less of a risk. When questioned about why they were investing in such expensive technical controls, they said that they couldn't afford the perception of weakness. If their public Web site could be defaced, clients would lose faith in their ability to protect more sensitive systems. In addition, they believed that any sign of weakness would open the flood gates for attackers to begin trying to break into much more sensitive resources. For them, the potential impact to reputation justified the cost of the controls. This is a good example of an organization choosing to ignore not only the likelihood of occurrence but also the sensitivity of the actual resource being protected. Instead, they were focused on protecting their reputation, not the Web server itself, and for them, money was almost no object when it came to their public perception. 
They understood that the WAF would add only a trivial reduction in likelihood of a breach, but they wanted to demonstrate to senior management that everything possible was being done to prevent any public display of weakness.

Once the risk exposure has been calculated in the risk assessment step of the risk management lifecycle, risk evaluation is the next task. There are several options for addressing a risk:

Avoid – this option is probably the least frequently used approach; however, it is important to keep it in mind as an option. Avoidance basically involves ceasing the activity that is presenting the risk altogether (or never engaging in the activity at all). So, if it is a new business venture or maybe a technology deployment, avoidance would be abandoning those efforts entirely.

Accept – many risks may be unavoidable or just not worth mitigating for the organization, so in this case, management needs to make a formal decision to accept the risk. Many organizations choose to ignore certain risks, which is really just an implicit form of acceptance.

Mitigate – most commonly, mitigation of a risk or remediation of a vulnerability is associated with risk management; however, remember that this is just one option. To mitigate a risk really means to limit the exposure in some way. This could include reducing the likelihood of occurrence, decreasing the severity of the impact, or even reducing the sensitivity of the resource. Mitigation does not imply a complete elimination of risk, just a reduction to an acceptable level.

Transfer – this option is gaining in popularity as organizations start to really understand where the responsibilities for risks lie. The classic example of this approach is purchasing insurance to cover the expected consequences of a risk exposure. Data breach insurance is just starting to emerge as an option for organizations, the idea being that you transfer the risk to the insurance company. Risk can also be transferred through contracts with partners and clients or by pushing functions out to the customer.

As security professionals, it is in our nature to try to fix all the risks we identify, but we need to strongly consider all our options in each case. The best meetings happen when you can go in and the business folks start making assumptions about what you won't let them do or which controls “security” is going to make them implement. Just sit back and listen to them discuss all the additional controls and processes, or shoot down each other's ideas, which they perceive as not being secure. In many organizations, you will find that the business still doesn't really understand the intricacies of when certain controls are appropriate, so you may have an opportunity to be the hero who gets to say “no, you really don't need all that, you can do it much more easily this way …” and wait for it to sink in. Finding the right controls for the level of risk is the key, and that is where you should be providing the most value to the organization.

The term “trusted partner” may be overused at this point, but it does describe how you want to present yourself to the business. The security team needs to work with the business to compare the cost of the safeguard versus the actual value of the resource or impact of a breach. The threshold is clear: if the controls cost more to implement and maintain than the cost of the risk exposure, then the risk isn't worth mitigating. It sounds simple, but as we discussed in Chapter 6, calculating the impact of the risk exposure in terms of a dollar value isn't always so straightforward!
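
One standard way to make that cost threshold concrete is the annualized loss expectancy (ALE) calculation, where ALE = SLE × ARO (single loss expectancy times annual rate of occurrence). The sketch below uses purely hypothetical dollar figures, and the caveat from Chapter 6 applies: real impact numbers are rarely this tidy.

```python
def annualized_loss_expectancy(single_loss, annual_rate):
    """ALE = SLE x ARO: expected yearly cost of leaving the risk in place."""
    return single_loss * annual_rate

def worth_mitigating(control_cost_per_year, single_loss, annual_rate):
    # The chapter's threshold: a control is only worth it if it costs less
    # per year than the exposure it addresses.
    return control_cost_per_year < annualized_loss_expectancy(single_loss, annual_rate)

# Hypothetical figures: a $200k defacement incident expected every two years.
print(annualized_loss_expectancy(200_000, 0.5))  # 100000.0
print(worth_mitigating(50_000, 200_000, 0.5))    # True: $50k control vs $100k ALE
print(worth_mitigating(150_000, 200_000, 0.5))   # False: control costs more than the risk
```

Note that the WAF example in the deep dive above deliberately breaks this rule: the institution paid for controls whose ALE case was weak because the soft cost to reputation was what they were really pricing.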

When presented with a risk, the security team, together with the resource owner and maybe even members of senior management, needs to negotiate a plan to reduce the risk or accept it. This can often be the longest step in the risk management workflow because budgets may already be set for the year, resources may be allocated to other projects, and other security risks may pull on the same resources. The bottom line is, there is going to be some negotiating. All of the constraints need to be balanced, and informed decisions need to be made. The security team will meet with each resource owner or Subject Matter Expert (SME) to discuss any outstanding risks and decide how to address them. A risk should be considered “addressed” from a tracking perspective when any of the following criteria are met:

Approval of an Exception Request (accept)

Approval of a Mitigation Plan (mitigation or transfer)

Elimination of the Vulnerability (remediation, a form of mitigation)

Activity Causing the Exposure is Ceased (avoid)

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781597496155000086

Internet Information Services – Web Service Attacks

Rob Kraus, ... Naomi J. Alpern, in Seven Deadliest Microsoft Attacks, 2010

Dangers with IIS Attacks

IIS and Web servers are immediately exposed to a dangerous environment, simply because of the roles the servers are expected to fulfill. IIS is intended to serve Web-based content to both internal and external users who rely on Web services to interact with your organization. In cases where IIS is serving Web content to Internet-based users, it is immediately exposed to significantly more threats than if it were simply providing content on internal networks. Access to IIS servers via the Internet allows anyone navigating the Internet to connect to the servers and perform various activities; this not only includes legitimate users but also malicious attackers.

Tip

Administrators who have taken a close look at their organizations' IIS logs will be able to agree that both legitimate and malicious activities can be witnessed almost on a daily basis. In addition to viewing IIS logs, administrators should also consider tracking malicious activity by viewing firewall, IDS, and IPS logs on a regular basis.
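
The log review suggested in the tip above can be partly automated by scanning for known probe signatures. This is a minimal, assumption-laden sketch: the patterns and sample log lines are invented for illustration, and a real IIS W3C log would need proper field parsing rather than whole-line matching.

```python
import re

# A few illustrative probe signatures; a real watchlist would be far larger.
SUSPICIOUS = [
    re.compile(r"\.\./"),                      # directory traversal
    re.compile(r"cmd\.exe", re.I),             # reaching for the command shell
    re.compile(r"%20or%20|'\s+or\s+'", re.I),  # crude SQL injection probe
]

def flag_suspicious(log_lines):
    """Return only the log lines matching a known-bad pattern."""
    return [line for line in log_lines
            if any(p.search(line) for p in SUSPICIOUS)]

sample = [
    "2010-01-01 GET /index.html 200",
    "2010-01-01 GET /scripts/..%2f../cmd.exe 404",
    "2010-01-01 GET /login.asp?user=a'%20or%20'1'='1 500",
]
for hit in flag_suspicious(sample):
    print(hit)  # prints the two probe lines, not the clean request
```

Feeding firewall, IDS, and IPS logs through the same kind of filter gives the regular cross-source review the tip recommends.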

So, what are some of the dangers of hosting Web content and making the content publicly available? Well, it really depends on the scope of the application, type of content being served, and the sensitivity of the content. Depending on the type of content presented, the impact from an attack against IIS can be significant or just a nuisance. The following examples provide insight into some different situations where attacks against IIS can have various levels of impact on your organization.

One popular attack scenario often chosen by attackers and “hacktivists” is Web site defacement. Web site defacements usually involve finding a flaw in the implementation of a Web application or Web server and leveraging the flaw to change Web site content to spread a targeted message. Some examples of previous defacements can be viewed by visiting the zone-h Web site and browsing through the archives. Zone-h archives and tracks information about the defacements so the public can view the results of a successful defacement even after the Web site has been restored back to its original state. After viewing several of the recent defacements, you will probably notice some attacks are simply annoying and equivalent to graffiti; however, other examples will display a message crafted by the attacker to make a statement with the goal of promoting his or her political or other agenda.

Note

A hacktivist-launched attack is usually the work of an individual or a group trying to convey a message and influence people and organizations by using hacking techniques to spread their message. Many of the hacktivist activities of past years have spread messages against nuclear war, power, and political repression and recently have questioned the validity behind research data about global warming.

Although a defacement attack may appear to be merely annoying, it can do real damage to your organization's reputation if executed properly. In cases where online shopping sites are compromised, it may significantly impact the business generated from your site, as online customers may lose confidence in how well your organization secures private customer information.

DoS attacks against IIS can also significantly impact customer confidence and cause prolonged service degradation or outages for legitimate users and customers. Several vulnerabilities exist that may affect IIS Web and FTP server components, allowing attackers to cause DoS conditions.

Attacks do not have to deny service or deface Web sites to be effective. In certain situations, an attacker may decide to compromise an IIS server with the sole purpose of gaining a foothold within the network and then conducting further attacks against internal resources. Once inside your network, an attacker may be able to launch additional attacks from the compromised systems and attempt to gain access to other targets within the Demilitarized Zone (DMZ) or other network segments. We will discuss this type of attack and defenses against it in the section “Defenses against IIS attacks” of this chapter.

Scenario 1: Dangerous HTTP Methods

One of the concerns when dealing with Web servers is learning how the server is configured and what types of interaction are allowed for unauthenticated visitors to applications running on the Web server. Some of these interactions come in the form of HTTP methods as defined in RFC 1945 (HTTP/1.0) and RFC 2616 (HTTP/1.1). HTTP has many methods that can allow various types of interaction between Web clients and Web servers. A brief review of some of the different methods available per the RFCs is provided in Table 6.2.

Table 6.2. HTTP methods

GET – The GET method is used when making requests for resources on a Web server. This is the type of request sent to a Web server when you click on a hyperlink to visit a Web site. It returns the header information and the body of the requested document.

POST – The POST method is often used when users fill out forms and send data to a server. A common example is a user logging into a Web server by providing credentials and clicking a submit button.

OPTIONS – The OPTIONS method requests information from the server about which methods are available for a requested resource.

PUT – The PUT method allows a user agent to create new content or update existing content at a specified location. If enabled, PUT can overwrite or create resources on the server.

DELETE – The DELETE method removes the content specified within the request if the method is enabled on the server.

HEAD – The HEAD method is almost identical to GET; the key difference is that the response includes only the metadata for a requested resource.

TRACE – The TRACE method is often used for diagnostics, testing, and debugging.
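
A quick defensive check that follows from these methods is to issue an OPTIONS request and inspect the Allow header for methods that permit modification or leakage. The small sketch below only parses a header value; the grouping of "dangerous" methods is a triage judgment call for this example, not an official classification.

```python
# Methods that permit modification or information leakage; this grouping
# is a judgment call for triage, not an official classification.
DANGEROUS = {"PUT", "DELETE", "TRACE"}

def risky_methods(allow_header):
    """Parse an Allow header value and return the risky methods it advertises."""
    advertised = {m.strip().upper() for m in allow_header.split(",") if m.strip()}
    return advertised & DANGEROUS

print(sorted(risky_methods("GET, HEAD, OPTIONS, POST")))       # []
print(sorted(risky_methods("GET, POST, PUT, DELETE, TRACE")))  # ['DELETE', 'PUT', 'TRACE']
```

Running this against the Allow header of each exposed resource is an easy first pass when auditing an IIS deployment for the scenario that follows.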

Now that you have an understanding or refresher of the basics of HTTP methods, let's explore our first scenario. In this scenario, our attacker “Mike” is working on some projects for work and decides it's time to take a short break. During the day, Mike is a programmer for a company that creates complex network scanning tools but at heart, he just likes to break into networks for fun. He hopes to someday be one of the cool “penetration testers” he always hears about.

During his breaks, Mike likes to explore the Internet and enjoys finding flaws in Web site and server deployments. While on his break, he decides to fire up his MacBook Pro and starts looking for targets of opportunity to continue some research he has been doing on Web server security. At a loss for ideas about whom to experiment on, he decides to poke around the “Brandon's Discount Coding Books” Web site, where he had recently purchased his latest C++ programming book. After a few minutes of reviewing the structure of the Web site, he decides to run a few tools against it and notices that one of the tools indicates the HTTP PUT method is enabled on the Web server. Mike knows this can be very dangerous, as attackers can sometimes use the HTTP PUT method to upload files to the Web server.

Mike recalls reading that it is possible to upload files capable of executing commands on the underlying server. Since the Web server is using Active Server Pages (ASP) for delivering content, he can use his knowledge of HTTP PUT and some specially crafted ASP pages to interact with the server. After a few more minutes of searching on the Internet, Mike finds an ASP page he can upload to interact with the server. Mike transfers the file, named cmd.asp, to the server using the HTTP PUT method. He then opens his Web browser and connects to the Web site and the ASP page he uploaded a few minutes earlier.
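The PUT upload itself is just an ordinary HTTP request. The sketch below is a hypothetical illustration using only Python's standard library: it stands up a tiny local server that accepts PUT (playing the role of the misconfigured IIS box) and uploads a stub file to it, so nothing here touches a real target:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class PutHandler(BaseHTTPRequestHandler):
    """Minimal local stand-in for a server with the PUT method enabled."""
    store = {}  # path -> uploaded bytes

    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        PutHandler.store[self.path] = self.rfile.read(length)
        self.send_response(201)  # 201 Created: the upload was accepted
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

def put_file(host, port, path, body):
    """Upload `body` to `path` with an HTTP PUT; return the response status."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    conn.request("PUT", path, body=body)
    status = conn.getresponse().status
    conn.close()
    return status

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), PutHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    # The body is a harmless placeholder, not a working cmd.asp.
    print(put_file("127.0.0.1", server.server_port, "/cmd.asp", b"<% ' stub ' %>"))
    server.shutdown()
```

A 2xx status in response to the PUT is what tells the attacker the upload landed.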

The ASP page uploaded is capable of interacting with the server's local cmd.exe application found on Windows operating systems. The page will allow Mike to interact not only with the Web site but also with the underlying operating system. Mike decides to attempt adding a new user to the operating system by using the net user command. If the Web server is running under the context of a privileged user allowed to create new accounts on the system, then the account should be created. Figure 6.1 illustrates Mike entering the command in the text box of the ASP page he had uploaded earlier to create a new user.
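The uploaded page's core behavior, running a command through the local shell and echoing the output back, can be sketched in Python; subprocess stands in for the ASP-to-cmd.exe hand-off described above, and the command shown is a harmless placeholder rather than net user:

```python
import subprocess
import sys

def run_command(cmd_args):
    """Run a command and return its exit code plus combined output,
    mirroring what the uploaded ASP page does through cmd.exe."""
    result = subprocess.run(cmd_args, capture_output=True, text=True, timeout=30)
    return result.returncode, result.stdout + result.stderr

if __name__ == "__main__":
    # Harmless cross-platform placeholder; on the compromised host this would
    # be something like ["cmd.exe", "/c", "net user ..."].
    code, out = run_command([sys.executable, "-c", "print('The command completed successfully.')"])
    print(code, out.strip())
```

Whether the account creation succeeds then depends entirely on the privileges of the account the Web server process runs under, which is the point of the scenario.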

FIGURE 6.1. Add User from Web

After Mike has run the command, he decides to see if it actually worked and uses the net user command again to list all of the accounts currently configured on the system. The output from the net user command can once again be viewed on the ASP page that Mike uploaded earlier, as shown in Figure 6.2. As you can see, it appears that Mike has the appropriate permissions to interact with the system.

FIGURE 6.2. List Users

Next, Mike decides that he wants to learn a little bit more about the internal network connected to the Web server and uses the route print command to display a list of configured routes and other important network configuration information. The output for this command is seen in Figure 6.3.

FIGURE 6.3. Print Routes

“Where to now?” you ask. Well, the sky is the limit depending on the type of access you currently have and the other protocols or interfaces available on the target system. It is fairly obvious that this attack can have a real negative impact on the security of the Brandon's Discount Coding Books online retail Web site. With the right conditions in place, this entire attack took under 5 minutes to perform. Is your Web server configured correctly?

Scenario 2: FTP Anonymous Access

FTP is a service that has been around for a very long time, and many papers have been published on how to properly secure it. It is used by many organizations as a convenient way of transferring large amounts of data from one location to another. A few examples of data usually transferred include Web content, application updates, backups from remote systems, and transaction logs. Many times administrators do a fairly good job of locking down FTP servers to allow access only to authorized users; however, penetration testers still find misconfigured FTP servers on a regular basis.

In this scenario, the attacker “James” is looking for a place to store the latest release of his favorite Massively Multiplayer Online Role-Playing Game (MMORPG), “World of Hackercraft.” This game has been very popular in the MMORPG gaming community for many years, and being a true fan, it would be a shame for James not to share the newest release with his closest friends. Since many of his friends are located in various countries around the world, he decides it would be best to upload a copy of the software to an FTP server so they can access it anytime they wish.

Harnessing his knowledge of FTP and the power of the Internet, James begins to scan blocks of IP addresses in an attempt to identify FTP servers capable of storing the game files. Specifically, James is attempting to identify FTP servers allowing anonymous access with write permissions. Fortunately for James, this does not take long, as he is able to find a Voice over IP (VoIP) server with FTP enabled and anonymous writable access. Figure 6.4 illustrates the use of Metasploit to locate FTP servers with anonymous access enabled.
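A rough idea of what such a scan checks per host can be sketched with Python's ftplib; the probe directory name is made up for illustration, and this kind of probe should only ever be pointed at servers you are authorized to test:

```python
from ftplib import FTP, error_perm

def check_anonymous_write(host, port=21, timeout=5):
    """Probe an FTP server for anonymous access.
    Returns 'write' if an anonymous user can create directories,
    'read' if anonymous login works but writes are refused,
    and None if the server is unreachable or refuses anonymous login."""
    ftp = FTP()
    try:
        ftp.connect(host, port, timeout=timeout)
        ftp.login()  # anonymous login with ftplib's default credentials
    except (OSError, error_perm, EOFError):
        return None
    try:
        ftp.mkd("_writetest")  # hypothetical probe directory name
        ftp.rmd("_writetest")  # clean up after ourselves
        access = "write"
    except error_perm:
        access = "read"
    finally:
        ftp.quit()
    return access
```

A host that comes back as 'write' is exactly the kind of server James is hunting for; Metasploit's ftp_anonymous scanner automates the same basic check across address blocks.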

FIGURE 6.4. Metasploit FTP Scan

Once the server is located, he uploads a copy of the game to a directory he creates on the FTP server. Figure 6.5 illustrates the attacker connecting to the FTP server, creating a directory, and uploading the game for his friends to download later. The software is now ready to be downloaded, so James sends an e-mail to his friends with the IP address of the server and the name of the directory in which the software is stored. James' friends are now able to connect to the FTP server, navigate to that directory, and begin downloading the software for later use. James looks forward to meeting his friends in the game and fires up his game client to start exploring the strange new worlds found in the latest release.

FIGURE 6.5. FTP Upload

How is this attack possible? In this scenario, the attacker simply identified a common misconfiguration in the IIS FTP server and used it to his advantage. Anonymous access for FTP is dangerous enough by itself, purely because sensitive data is often left on the server and anyone who finds the server may be able to read it. Increase the severity of the vulnerability by allowing write access to the server and it will not be long before someone takes advantage of it. As a matter of fact, now that James' friends know the IP address of the writable FTP server, they may start uploading more games, cracked software, and other files whenever they like. Implementing proper authentication and authorization, in addition to logging, can help mitigate this type of risk. In addition, implementing Disk Quotas for FTP is also a good idea and can help prevent abuse of the available disk space should an attacker gain access to a legitimate FTP user account.

Scenario 3: Directory Browsing

When a Web server is hosting Web content, it has several ways of handling the data stored in its directories. In many cases, if a default page such as index.html is available, the server will render that page, displaying something for the user accessing the Web site to look at. If the server is configured correctly and no default page is available, it will display an error indicating directory browsing is not allowed or enabled. However, if the server is configured to allow directory browsing, it will display the contents of the directory with hyperlinks that can be clicked, allowing navigation through the directory structure of the Web site.
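A crude way to recognize such listing pages programmatically is to look for the boilerplate text that IIS and Apache emit in auto-generated indexes. This heuristic sketch (marker strings chosen from typical listing pages, not taken from the chapter) illustrates the idea:

```python
# Strings typical of auto-generated directory listings (illustrative, not exhaustive)
LISTING_MARKERS = (
    "[To Parent Directory]",   # IIS listing pages
    "Index of /",              # Apache mod_autoindex page titles
    "Directory Listing For",   # some servlet containers
)

def looks_like_directory_listing(html):
    """Heuristic check: does the page body contain text typical of a listing page?"""
    lowered = html.lower()
    return any(marker.lower() in lowered for marker in LISTING_MARKERS)

if __name__ == "__main__":
    print(looks_like_directory_listing("<title>Index of /icons</title>"))
```

The same marker strings double as search-engine queries, which is precisely the trick the attackers in this scenario stumble onto below.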

For many years, the Apache Web Server has enabled directory browsing for the /icons/ and /icons/small/ directories by default. Although the directory only contains icons, this can be problematic in cases where administrators inadvertently add sensitive data to the directory, exposing it to anyone who visits the site. Although this chapter focuses on IIS and IIS attacks, this Apache example was too good to pass up. An example of directory browsing can be viewed on the Apache Web site located at http://httpd.apache.org/icons/.

In this scenario, the attackers “Chris” and “JR” are learning about how directory browsing can allow attackers to gain access to sensitive information on IIS Web servers deployed with directory browsing enabled. The information that can be viewed may not be intended for unauthenticated or unauthorized individuals and may be useful in future attacks. To experiment with what type of information may be visible through directory browsing, Chris and JR decide to browse the Internet and see if they can identify sites with directory browsing enabled. After clicking through random Web sites for approximately 30 minutes, Chris and JR conclude that there must be a better way to search for misconfigured sites and do a little research.

Chris quickly learns that by using search terms including words that are commonly found on directory browsing pages, he can find many sites with directory browsing enabled. One example is using search terms such as “/scripts” and “to parent directory.” Upon reviewing the results of their search query, Chris and JR quickly realize they are on to something big. After clicking on one of the search results, they are now able to view the directory listed in Figure 6.6.

FIGURE 6.6. Directory Browsing

This directory contains a few files that are immediately appealing to JR, as he knows that a file with a .sql extension is usually an SQL script used to set up, maintain, or modify data stored on an SQL server. JR decides to download the config.sql file and view its contents to determine if any sensitive information is contained within.

It appears Chris and JR hit the jackpot! Within the config.sql file, there are multiple SQL statements used to configure a database from scratch, and multiple user accounts and initial passwords are found in SQL statements used to populate the initial users database table. Now Chris and JR can use this information to attempt to authenticate to the Web application itself and possibly gain access to administrative functions that are used to configure the Web site. If database ports are available, the attackers may also be able to directly connect to the database and run SQL queries to mine data directly from the database.

This scenario provided you with a quick overview of why and how directory browsing attacks can allow attackers to gain access to your sensitive information. Ensuring that Web servers are not configured to allow directory browsing can help prevent attacks such as these from becoming a reality.

Epic Fail

It is 2:00 a.m. and a penetration tester is working on a penetration test for a client. The tester discovers directory browsing is enabled on an IIS 5.0 Web server used to provide access to business partners and to store internal records that have been scanned for archiving. After discovering that the Web server has directory browsing enabled, the penetration tester decides to use the DirBuster tool from the Open Web Application Security Project (OWASP) to identify possible hidden directories.
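DirBuster's core loop is essentially a wordlist of candidate names fired at the server. A toy Python version of that idea might look like the following; the wordlist here is an illustrative stand-in for the large lists real tools ship with:

```python
import http.client

# Tiny illustrative wordlist; real tools ship lists with thousands of entries
COMMON_DIRS = ["admin", "backup", "checks", "logs", "scripts"]

def probe_directories(host, port, wordlist, timeout=5):
    """HEAD-request each candidate directory and keep those the server does not 404."""
    found = []
    for name in wordlist:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("HEAD", "/%s/" % name)
        status = conn.getresponse().status
        conn.close()
        if status != 404:
            found.append((name, status))
    return found
```

Any directory that answers with something other than 404, like the “checks” directory in this story, is then inspected by hand.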

After running the tool, the penetration tester identifies a directory named “checks” in the results. Investigating further, the tester finds that the directory has browsing enabled and contains scanned copies of accounts receivable checks for the last 3 years. The analyst quickly contacts the client and informs them of the situation.

Unfortunately, this is a true story, and you may imagine the surprise of the client when they realized their customers' sensitive data had been exposed in such a manner that anyone could access it. Sadly enough, no configuration management records were kept, and it is nearly impossible to determine how long the data had been exposed.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781597495516000066

Threat

Bill Gardner, in Building an Information Security Awareness Program, 2014

Hacktivism

Hacktivists are motivated by political causes [4]. The most widely known hacktivist group is Anonymous and its affiliated groups [5]. Hacktivism is defined as “the nonviolent use of illegal or legally ambiguous digital tools in pursuit of political ends. These tools include web site defacements, redirects, denial-of-service attacks, information theft…” [6].

There are many different examples of hacktivism, but the largest, most successful, and most well known was Operation Sony. Also known as Op Sony, the cause du jour Anonymous rallied around centered on the case of George Hotz, known as the first hacker to “jailbreak” the iPhone. George, known online by his handle GeoHot, also wanted to “jailbreak” his PlayStation 3, which would give users the ability to play and share homemade games. On December 29th, 2010, George Hotz and the rest of the hacker collective known as fail0verflow announced they had retrieved the root key of Sony's PlayStation 3 gaming console at the 27th Chaos Communication Congress. On January 3rd, 2011, George Hotz published his findings on his website, geohot.com. On January 11th, 2011, Sony filed a lawsuit against George Hotz and other members of fail0verflow for releasing the PlayStation 3's root key [7].

In April 2011, Anonymous fired the first salvo in what came to be known as Op Sony, by taking the PlayStation Network (PSN) and several PlayStation-related domains, including the PlayStation Store, offline [8]. It was later learned that the attacks not only resulted in an outage of the PSN service but also turned out to be one of the largest data breaches in history involving over 70 million records including personally identifiable information (PII) and credit card information [9].

This period also saw the rise of a subgroup of Anonymous known as LulzSec. This brash subgroup ultimately took credit for stealing 24.6 million records in the PlayStation Network breach [10]. The group then went on an extensive hacking spree involving a number of high-profile targets, from Fox.com to PBS and the game company Bethesda Game Studios, taunting law enforcement and tweaking its nose the entire time. The group saw themselves as modern-day Robin Hoods who were exposing the insecurities of the websites they breached. As their hacking spree continued, they garnered growing public attention and the attention of law enforcement through the summer of 2011. The group's activities became more brazen and outlandish [10]. By the beginning of the fall of 2011, the group began to unravel when it was reported that the group's leader Sabu, whose real name is Hector Xavier Monsegur, had been arrested on June 7, 2011, and had turned FBI informant [11]. By the end of 2011, all the members of the LulzSec crew would be arrested and jailed. While the reign of the LulzSec crew had ended, the various groups known as Anonymous live on.

Anonymous got its start in 2003 on the Internet image site 4chan.org, where each user posted as an Anonymous user. As the site evolved, many of the “Anonymous” users found that they had certain goals and political views in common. Calls to action are mainly posted on /b/, a 4chan.org message board for the posting of random information. While Anonymous has been involved in a number of data breaches, they are mainly known for distributed denial-of-service (DDoS) attacks on government, religious, and corporate websites. Some of the high-profile targets of such attacks include the Westboro Baptist Church, the Church of Scientology, PayPal, MasterCard, and Visa [12]. Many members of Anonymous learned of the Sony lawsuit against George Hotz on the site, and ongoing operations were often discussed and coordinated on 4chan, but Anonymous now shares operational details on Pastebin.com. Pastebin was developed as a site to share information for a certain period of time [13], but it's unclear that the developers ever dreamed it would become the focal point of ongoing Anonymous operations as it is today. Anonymous and Anonymous-associated hacking groups also use the site to dump personal information about their enemies, known in the Internet underworld as doxing, and to share confidential information taken from data breaches, such as e-mails, passwords, usernames, and password hashes.

URL: https://www.sciencedirect.com/science/article/pii/B9780124199675000028

Cyber Attacks by Nonstate Hacking Groups

Paulo Shakarian, ... Andrew Ruef, in Introduction to Cyber-Warfare, 2013

Summary

This chapter represents an attempt to outline what the hitherto most notorious hacker collective, Anonymous, might look like on a whiteboard. Publicly available hints toward its structure have been followed and explored: the possible initial “directorate” recruited from an idealist veteran hacking group as well as the better-known online seedbed, a mingling place for contempt and counterculture. Whatever the beginnings, Anonymous is driven by a set of motivations largely shared by those whose actions are most visible: freedom of information and the right to online privacy. There are a certain number of online arenas that are used to decide upon, plan, and organize a hack, mostly using SQL injection or a related, basic hacker skill that grants access to target systems. Personal, confidential, and otherwise compromising data are later dumped on one or more popular file-sharing sites, mainly to give evidence of the intrusion. Mostly after capturing sensitive data, but not requiring this more or less clandestine step, a DDoS attack is used to render the target Web site inaccessible. In some cases, Web site defacement prominently displayed the reason and goals of Anonymous’ interference. Over its history, the collective experienced dedicated functional spin-offs (e.g., LulzSec) and ideological derivatives who sought to be more “pure” (e.g., MalSec), whose history and demise is anecdote as well as a defining element of Anonymous’ structure. The political hacktivists also encountered like-minded collaborators (e.g., PLF), who probably helped to emphasize this facet of the collective. Finally, this chapter sought to display representative motivations, modi operandi, and tools through the brief description of select hacks.
In the wake of Anonymous’ refocusing, or simply the skilled public relations work of a few Anons, the chapter concludes with examples of products at a nascent stage: a music portal, two dumping sites, and a highly controversial, quickly retracted operating system.

Anonymous might be more a history of hacking exploits than a social structure, but in the role of Sabu alone it contradicts its claims of being a legion of countless “everybodies.” Sabu organized, motivated, and led many of the highly publicized hacks in 2011. His absence after the March 2012 arrests was noticeable, and it forced LulzSec to reinvent itself. The global wave of arrests in 2011, and especially those in early 2012, is perceived by some as crippling for the Anonymous collective, which can only be the case for an organization that is hierarchically structured. Detentions can affect the activity level of a group only if the arrested individuals were crucial in organizing its activities. Whether or not there is or was a handful of people conceiving and directing the politically aware and active Anonymous, the collective now appears to be much more than a loosely knit organization. The collective action presented in its every hack represents an enormous challenge not only to the reader who seeks to wrap her head around this virtual phenomenon but also to the social scientist. Initially self-proclaimed members of the collective without any real-world connection to fellow hacktivists, political Anons may form real local groups with regular meetings, faces, and real names. In the virtual meeting spaces, (user) name recognition still applies and allows virtual groups and Anonymous spin-offs to form. But the numerical majority of the collective remains elusive, with many different levels of possible engagement ranging from sympathizers to participants in DDoS attacks. The world-spanning virtual social network that conceives, decides, plans, and organizes hacking exploits is what and who Anonymous really is: a number of IRC channels, blogs, and message boards accessed from (at least) several hundred thousand devices all around the globe.
4chan might have been its cradle, but so far it seems Anonymous has risen beyond this tactless, seedy playground with its bored opportunists, tricksters, and hustlers.

So far, it seems the political activists have by and large conquered the movement, though it cannot completely abandon its trickster nature. The Janus-faced character of the collective, reflected in its every aspect from activity to motivation to self-understanding, is due to the myriad of individuals who have used the Anonymous platform for very different reasons. In a handful of interviews, some self-proclaimed Anons try to fix an image for the collective, but due to the elusiveness of its membership it will be difficult to instill and maintain. The political hacktivists of the Anonymous collective depend on its favorable depiction in the media, since the nature of its preferred modus operandi, the employment of the Low Orbit Ion Cannon in DDoS attacks, hinges on a large number of volunteers. The advances into new alleyways with the launches of the Anonymous operating system, the music-sharing Web site, and the data dump sites, albeit with different levels of success, help diversify the collective, but are also evidence of the lack of guidelines for members that could serve as identity markers.

URL: https://www.sciencedirect.com/science/article/pii/B9780124078147000063

Attack Detection and Defense

Brad Woodberg, ... Ralph Bonnell, in Configuring Juniper Networks NetScreen & SSG Firewalls, 2007

Understanding the Anatomy of an Attack

There are almost as many ways to attack a network as there are hackers, but the majority of attack methods can be categorized as either manual attacks or automated attacks. Manual attacks are generally still performed by a piece of code or other script, but the attack itself is initiated at the request of a live user who selects his or her targets specifically. Automated attacks cover the kinds of attacks made by self-propagating worms and other viruses. There's also the question of the competence of an attacker or the complexity of an automated attack, which we'll discuss here as well.

The Three Phases of a Hack

Most hack attacks follow a series of phases:

1. Reconnaissance: Initial probing for vulnerable services. This can include direct action against the target, such as port scanning, OS (operating system) fingerprinting, and banner capturing, or it can be research about the target.

2. Exploit: An attempt to take control of a target by malicious means. This can include denying the service of the target to valid users. Generally, the ultimate goal is to achieve root-, system-, or administrator-level access on the target.

3. Consolidation: Ensuring that control of the target is kept. This usually means destroying logs and disabling firewalls and antivirus software, and sometimes includes process hiding and other means of obfuscating the attacker's presence on the system. In some extreme cases, the attacker may even patch the target against the exploit he used to attack the box, ensuring that no one else exploits the target after him.

While each step may have more or less emphasis, depending on the attacker, most hack attacks follow this pattern of progression.

Script Kiddies

For manual attacks, the majority of events are generated by inexperienced malicious hackers, known both in the industry and the hacking underground as “Script Kiddies.” This derogatory reference implies both a lack of maturity (“just a kid”) as well as a lack of technical prowess (they use scripts or other pre-written code instead of writing their own). Despite these limiting factors, what they lack in quality, they more than make up for in quantity.

Under a hail of arrows, even the mightiest warrior may fall. These sorts of attacks will generally be obvious, obnoxious, and sudden, and will usually light up your firewall or IDP (Intrusion Detection and Prevention) like a Christmas tree.

The majority of these attacks have no true intelligence behind them, despite being launched by a real person. Generally, the reconnaissance phase of these sorts of attacks will be a “recon-in-force” of a SYN packet, immediately transitioning to phase two by banging on your front door like an insistent vacuum cleaner salesman. Script Kiddies (also “Skr1pt Kiddies,” “Newbies,” or just “Newbs/Noobs”) comb through security Web sites like Security Focus (www.securityfocus.com), Packet Storm Security (http://packetstormsecurity.nl), and other sites that provide proof-of-concept exploit code, looking for new scripts to try out. Once they have these scripts, they will blindly throw them against targets—very few of these amateurs understand exactly how these hacking tools work or how to change them to do something else. Many sites that provide code realize this and will purposely break the script so that it doesn't work right; the script will work correctly with a simple fix that is obvious to an experienced security professional after a walkthrough of the code.

Unfortunately, that only stops the new, inexperienced, or unaffiliated hacker. More commonly, hacking groups or gangs form with a few knowledgeable members at their core and new, inept recruits joining continuously. The members need not live near each other in real life; rather, they meet online in Internet Relay Chat (IRC) rooms and other instant messaging forums. These virtual groups amass war chests of scripts, code snippets, and shellcode that work, thanks to the efforts of more experienced members. Often, different hacking groups will start hacking wars, where each side attempts to outdo the other in either quantity or perceived difficulty of targets hacked in a single time span. Military targets in particular are seen as more difficult, when in fact the security of these sites is often well below corporate standards. Mass Web site defacements are the most common result of these intergroup hacking wars, with immature, lewd, or insulting content posted to the sites.

A bright side to this problem is that many times a successful breach by these amateurs is not exploited to its fullest, since many of these hackers have no clue as to exactly what sort of system they have gained access to, or how to proceed from there. To them, owning (a successful hack which results in a root, administrator, or system-level account) a box (a server), and modifying its presented Web page for others to see and acknowledge is generally sufficient. These sorts of attacks commonly do not proceed to phase three, consolidation.

From a protection standpoint, to defend against these sorts of attacks, it is important to keep DI and IDP signatures updated, and all systems patched, whether directly exposed to the Internet or not. Defense-in-depth is also key to ensuring that a successful breach does not spread. The motivation behind these groups is quick publicity, so expect hard, fast, obvious, but thorough strikes across your entire Internet-facing systems.

Black Hat Hackers

Experienced malicious hackers (sometimes called “Black Hat” hackers or just “Black Hats”) tend to be either a Script Kiddy graduating from the underground cyber-gangs, or a network security professional or other administrator turning to the “dark side”—or a combination of both. In fact, it is common to call law-abiding security professionals “White Hats,” with some morally challenged but generally good-intentioned people termed “Grey Hats.” The clear delineation here is intent: Black Hats are in it for malicious reasons, often those of profit. This hat color scheme gets its roots from old Western movies and early black and white Western TV shows. In these shows, the bad guys always wore black hats, and the good guys wore white hats. Roles and morality were clearly defined. In the real world, this distinction is far more muddled.

Black Hats will slowly and patiently troll through networks, looking for vulnerabilities. Generally, they will have done their homework very thoroughly and will have a good idea of the network layout and systems present before ever sending a single packet directly against your network; their phase one preparation is meticulous. A surprising amount of data can be gleaned for free from simple tools like the WhoIs database and Google or other Web search engines. Mailing lists and newsgroups, when data-mined for a target's domains, can reveal many important details about what systems and servers are used, simply by monitoring network and system admins as they ask questions about how to solve server problems or configure devices for their networks. A wealth of information can be gleaned this way for social engineering as well. Names, titles, phone numbers, and addresses—it's all there for use by a skilled impersonator, allowing them to make a few phone calls and obtain domain information, usernames, and sometimes even passwords!

Are You Owned?

Social Engineering

Social engineering is the term used to describe the process by which hackers obtain technical information without using a computer directly to do so. Social engineering is essentially conning someone to provide you with useful information that they should not—whether it's something obviously important like usernames and passwords or something seemingly innocuous like the name of a network administrator or his phone number.

With a few simple pieces of valid information, some good voice acting and proper forethought, a hacker could convince you over the phone that he or she was a new security engineer, and that the CEO is in a huff and needs the password changed now because he can't get to his e-mail or someone's going to get fired. “And that new password is what now? He needs to know it so we can log in and check it…”

Be sure to train your staff, including receptionists who answer public queries, to safeguard information so as to keep it out of the hands of hackers. A good idea is to employ authentication mechanisms to prevent impersonation.

The recon portion of the attack for a cautious Black Hat may last weeks or even months—painstakingly piecing together a coherent map of your network. When the decision to move to phase two and actively attack is finally made, the attack is quiet, slight, and subtle. They will avoid causing a crash of any services if they can help it, and will move slowly through the network, trying to avoid IDPs and other traffic logging devices. Phase three, Consolidation, is also very common, and typically includes patching the system from further vulnerability, so some Script Kiddy doesn't come in behind them and ruin their carefully laid plans.

A Black Hat's motivation is usually a strong desire to access your data—credit cards, bank accounts, Social Security numbers, usernames, and passwords. Other times, it may be for petty revenge for perceived wrongs. Or they may want to figure out a way to divert your traffic to Web sites they control so they can dupe users into providing these critical pieces of information to them—a technique known as phishing (pronounced like fishing, but with a twist). Some phishing attacks merely copy your Web site to their own, and entice people to the site with a list of e-mails they may have lifted off your mail or database server. Sometimes malware authors will also compromise Web sites in a manner similar to a Script Kiddy Web defacement, but instead of modifying the content on the site, they merely add additional files to it. This allows them to use the Web site itself as an infection vector for all who visit the site by adding a malicious JPEG file, Trojan horse binary, or other script into an otherwise innocuous Web site (even one protected by encryption such as Hypertext Transfer Protocol Secure, known simply as HTTPS).

Defense against these sorts of attacks requires good network security design as well as good security policy design and enforcement. Training employees, especially IT staff and receptionists or other public-facing employees, about social-engineering awareness and proper information control policy is paramount. For the network itself, proper isolation of critical databases and other stores of important data, combined with monitoring and logging systems that are unreachable from potentially compromised servers is key. Following up on suspicious activity is also important.

Worms, Viruses, and Other Automated Malware

As mentioned in the following “Notes from the Underground” sidebar, the concept of self-propagating programs is nothing new, but practical implementations have only been around for the last 15 to 20 years. Given that the Internet's origins stretch back 40 years, this is significant. Indeed, it's only in the last two to three years that malware has taken a rather nasty turn for the worse, and there's a good reason behind it.

Early worms were merely proofs-of-concept, either a “See what I can do” or some sort of glimpse at a Cyber Pearl Harbor or Internet Armageddon, and rarely had any purposefully malicious payload. This didn't keep them from being major nuisances that cost companies millions of dollars year after year, however. But lately, some of the more advanced hacking groups started getting the idea that a large group of computers under a single organization's complete control might be a fun thing to have. And the concept of a zombie army was born.

Are You Owned?

Are You a Zombie?

The majority of machines compromised to make a zombie army are those of unprotected home users directly connected to the Internet through DSL lines or cable modems. A recent study showed that while 60 percent of home Internet users surveyed felt they were safe from hackers, only 33 percent of them had some sort of firewall. Of that minority of Internet users with firewalls, 72 percent were found to be misconfigured. This means less than 10 percent of home Internet users are properly protected from attack!
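The sidebar's "less than 10 percent" figure follows directly from the survey numbers as quoted; a quick check of the arithmetic:

```python
# Survey figures exactly as given in the text.
have_firewall = 0.33   # share of home users with some sort of firewall
misconfigured = 0.72   # share of those firewalls found misconfigured

# Properly protected = has a firewall AND it is correctly configured.
properly_protected = have_firewall * (1 - misconfigured)
print(f"{properly_protected:.1%}")  # prints 9.2%
```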

Furthermore, of the users who had wireless access in their homes, 38 percent of them used no encryption, and the other 62 percent who did, used wireless encryption schemes with known security flaws that could be exploited to obtain the decryption key. Essentially, every person surveyed who used wireless could be a point from which a hacker could attack—and over a third of them effortlessly.

Find out more information from the study online at www.staysafeonline.info/ews/safety_study_v04.pdf.

Zombies, sometimes referred to as Bots (a group of Bots is a Bot-net), are essentially Trojan horses left by a self-propagating worm. These nasty bits of code generally phone home to either an IRC channel or other listening post system and report their readiness to accept commands. Underground hacker groups will work hard to compromise as many machines as they can to build up the number of systems under their command. Bot-nets composed of hundreds to tens of thousands of machines have been recorded. Usually, these groups use the bots to flood target servers with packets, causing a Denial-of-Service (DoS) attack from multiple points, and creating a Distributed Denial-of-Service (DDoS) attack. Nuking a person or site you didn't like is fun for these people. But today hackers are out for more than fun.
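Because classic bots phone home over IRC, one simple (and admittedly dated) defensive heuristic is to flag outbound connections to well-known IRC ports in firewall or flow logs. A hedged sketch, assuming the log entries have already been parsed into tuples:

```python
# Common IRC ports; a real deployment would also track protocol signatures,
# since modern bots use HTTP or custom channels instead.
IRC_PORTS = {194, 6665, 6666, 6667, 6668, 6669, 7000}

def flag_irc_connections(connections):
    """connections: iterable of (src_ip, dst_ip, dst_port) tuples,
    e.g. parsed from firewall or NetFlow logs.
    Returns the entries that contact a well-known IRC port."""
    return [c for c in connections if c[2] in IRC_PORTS]
```

An internal host that initiates IRC sessions to unfamiliar servers is a classic sign that a zombie is reporting for duty.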

Once the reality of a multi-thousand node anonymous, controllable network was created, it was inevitable that economics would enter the picture, and so zombie armies were sold to the highest bidder—typically spammers and organized crime. Spammers use these bots to relay spam so ISPs (Internet service providers) can't track them back to the original spammer and shut down their connection. This has become so important to spammers that eventually they began contracting ethically challenged programmers to write worms for them with specific features such as mail relay and competitor Trojan horse removal. Agobot, MyDoom, and SoBig are examples of these kinds of worms. Organized crime realizes the simplicity of a cyber-shakedown and extorts high-value transaction networks such as online gambling sites for protection from DDoS attack by bot-nets under the mob's control.

Protection from these tenacious binaries requires defense-in-depth (security checkpoints at multiple points within your network) as well as a comprehensive defense solution (flood control, access control, and application layer inspection). Many of the Script Kiddy defense methods will also work against most worms since the target identification logic in these worms is generally limited—phase one recon is usually just a SYN to a potentially vulnerable port. This is because there is only so much space for what the worm needs to do—scanning, connecting, protocol negotiation, overflow method, shellcode, and propagation method, not to mention the backdoor Trojan. Most worms pick targets completely at random and try a variety of attacks against them, whether they are valid targets for the attack or not. To solve the complexity problem, many Trojans are now split into two or more parts: a small, simple propagating worm with a file transfer stub; and a second stage full-featured Trojan horse with phone home, e-mail spamming, and so on. The first stage attacks and infects, then loads the second stage for the heavy lifting. This allows for an effective phase three consolidation.
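Since phase-one recon for most worms is just a SYN to a potentially vulnerable port, a single source that probes many distinct host/port pairs stands out clearly in traffic logs. A simplified detector sketch (the threshold of 20 targets is an illustrative assumption, not a standard value):

```python
def find_scanners(syn_events, threshold=20):
    """syn_events: iterable of (src_ip, dst_ip, dst_port) for inbound SYNs.
    A source probing many distinct host/port pairs looks like worm recon
    or a port scan; legitimate clients touch only a few targets."""
    targets = {}
    for src, dst, port in syn_events:
        targets.setdefault(src, set()).add((dst, port))
    return [src for src, seen in targets.items() if len(seen) >= threshold]
```

The same rate-based idea underlies the scan-detection features in many IDS products; the sketch just makes the counting explicit.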

Information obtained by Honeypot Networks (systems designed to detect attacks) shows that the average life expectancy of a freshly installed Windows system without patches connected directly to the Internet and without a firewall or other protection is approximately 20 minutes. On some broadband or dial-up connections it can take 30 minutes or longer to download the correct patches to prevent compromise by these automated attack programs. Using the Internet unprotected is a race you can't win.

Are You Owned?

Multivector Malware

Hacking (the term as used by the media for unauthorized access) is as old as computer science itself. Early on, it consisted mostly of innocent pranks, or was done for learning and exploring. And while concepts for self-replicating programs were bandied about as early as 1949, the first practical viruses did not appear until the early 1980s.

These early malicious software (or malware) applications generally required a user's interaction to spread—a mouse button clicked, a file opened, a disk inserted. By the late 1980s, however, fully automated self-replicating programs, generally known as worms, were finally realized. These programs would detect, attack, infect, and start all over again on the new victim without any human interaction. The earliest worms, such as the Morris Worm in 1988, had no purposeful malicious intent, but due to programming errors and other unconsidered circumstances, they still caused a lot of problems.

The earliest worms and hacking attacks targeted a single known vulnerability, generally on a single computing platform. Code Red is a classic example—it targeted only Microsoft Windows Web servers running Internet Information Server (IIS), and specifically a single flaw in the way IIS handled ISAPI (Internet Server Application Programming Interface) extensions. And while they did significant damage, a single flaw on a single machine tends to confine the attack to a defined area, with a known specific defense.

Unfortunately, this is no longer the case. Malware is now very complex, and the motivations for malware have changed with it. Early malware was limited to pranks like file deletion, Web defacement, CD tray openings, and so on. Later, when commerce came to the Web, and valuable data, like credit card numbers and other personal information were now online and potentially vulnerable, greed became a factor in why and how malware authors wrote their code. Recently, the culprits are spammers with significant financial clout, who pay programmers to add certain features to their malware so that spam (unsolicited email), spim (unsolicited instant messages), and spyware can be spread for fun and profit.

NetSky, MyDoom, and Agobot are the newest breeds of these super-worms. New versions come out almost weekly, and certainly after any new major vulnerability announcement. They don't target just one vulnerability on one platform—they are multi-vector, self-propagating infectors, and they'll stop at nothing to infiltrate your network. Most exploit at least four different vulnerabilities, as well as brute force login algorithms. These worms even attack each other—NetSky and MyDoom both remove other Trojan horses as well as antivirus and other security programs. A variant of Agobot attempts to overflow the FTP (File Transfer Protocol) server left behind by a Sasser worm infection as an infection vector.


URL: https://www.sciencedirect.com/science/article/pii/B9781597491181500125

Proactive Security and Reputational Ranking

Eric Cole, in Advanced Persistent Threat, 2013

Changing How You Think About Security

We have been hinting at it for a while, but the bottom line is that the APT and the next generation of threats require that we completely re-think how we do security. What has worked in the past will not work against an adversary that has changed the rules. This is evident from the fact that organizations keep spending more money on security and are still getting broken into. Essentially, many organizations keep doing more of the wrong things, which just increases their frustration without actually solving any problems. The concern we have with many clients is that if executives keep increasing the security budget but do not see a measurable reduction in APT incidents, this could undermine security's ability to be an effective business enabler across the enterprise.

Some problems require small adjustments and some require large ones. The APT, and the new way the adversary operates, requires large adjustments to how we implement and roll out security across an organization. The main reason is that with the APT, the adversary has studied how organizations implement security and has found fundamental weaknesses in how the defenses work. The only way to start winning is to fix those weaknesses and make the attacker's job harder. This is analogous to robbing a bank. If the safe is poorly located and has a flaw that allows it to be opened by someone without the combination, that is a design concern. If, however, the main threat is someone walking into the front of the bank, pulling a gun on a teller, and stealing the money, the fundamental architecture of the bank does not have to change: adding more video cameras, armed guards, and alarms can mitigate the threat to an acceptable level, either by deterring a robber or by limiting the damage one can cause. This is equivalent to the threats of the past, which involved web site defacement, large-scale worms, and denial-of-service attacks. Adding more of the traditional security that was already in place allowed an organization to react to and deal with the problem effectively. More or better traditional defenses worked very well against standard attackers.

Today the threat has shifted from a low-tech bank robber who walks in the front door with a gun after minimal planning to an organized criminal element that spends months planning the attack with very well-trained people. Continuing with our bank example, if a criminal element obtained the blueprints for the bank, learned the make and model of the safe, and successfully exploited fundamental flaws in its design, this would cause major concern for the bank. After such an attack, installing more video cameras and a better alarm system would not help, because the flaw lies in the design itself. The only solution would be to re-design the bank, implementing an architecture robust enough to withstand these advanced attacks.

Today, we are in a similar situation. Organizations built a robust network (typically in the 1990s) based on necessity and over the years have enhanced it based on functional requirements. Security was added on an as-needed basis, but was not part of the original design. As new threats evolved, additional security devices were added to the existing network. If you are adding something to an existing entity, you are limited in what you can do. Adding a traditional wired alarm system to a house that is already built is difficult and costly; integrating an alarm system while the house is being built is not only more cost effective but also more scalable. What is important to remember in these discussions is that we are not stating that the current technologies many organizations use are useless against the APT. We are stating that those technologies, added onto an existing network that was not designed correctly, are not as effective as they could be against the APT. Many of these technologies, if configured differently and integrated more tightly into the existing infrastructure, could play a much larger role in dealing with the APT.

Whether we like it or not, the attackers have obtained the blueprints to our networks and they are exploiting weaknesses in our design. The reason adding security devices to our existing networks was effective against the traditional threat is that the traditional threat took advantage of configuration issues. Weak passwords, unpatched systems, misconfigured servers, and extraneous services are all configuration problems that the attacker exploited. Adding devices that could filter, monitor, and/or control rogue packets or information helped protect our environments. The problem with the APT is that it exploits a fundamentally different problem—it goes after flaws in the design, not the configuration. Now that the rules have changed, the approach we take with security has to be adapted and changed. What worked in the past will not work in the future. The good news is that most organizations recognize that nothing lasts forever and most critical items have to be replaced every 12–15 years. Since many networks and data centers were initially designed in the mid 1990s, this is a perfect opportunity to re-design the network with integrated security, as opposed to adding security on after the fact.

In re-designing our security to fix the fundamental flaws in our architecture, we need to make sure that our security devices can see the information needed to make the right decisions. While inbound traffic is important, most organizations have it covered fairly well. Between firewalls, IDS, and IPS systems, there is a lot of blocking and tackling being done on inbound traffic. The problem is that the inbound traffic being used by the APT looks normal. The threat has done a very good job of understanding how each of these devices works and what specifically it looks for, and of making sure the attacker's traffic is allowed through and does not fit any of the indicators being tracked. While rules get updated and signatures can be modified, the traditional installation of many common security devices is going to be somewhat predictable in how it works. If you give someone with advanced skills a somewhat static problem and ask them to find a way around it, the only question is how long it will take, not whether it is possible. Once again, it is critical not to misread any of this or think that we are implying these measures are not effective and should be removed from the network. They are critical components for success, but clearly not enough to deal with the APT. It is also important to note that this is not a criticism, since the technology was never built to deal with this problem. This is like saying an NFL quarterback is not a good basketball player. That is not a criticism, because basketball is not the skill set they have trained for and have expertise in.

Most devices were not built to prevent the APT. While it is still important to look at inbound traffic, the real value in inbound traffic comes on the correlation side. The firewall might not be able to block the traffic, but if it is correlated with all of the other perspectives that come from the different devices on a network, it is usually much easier to see the needle in the haystack. The trick to finding a needle in a haystack is to reduce the amount of hay and/or make the needle bigger. Performing event correlation allows the needle to grow, making it easier to find unusual or strange patterns on the network.
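The correlation idea can be made concrete: an address reported by several independent sensors within a short window is a much bigger "needle" than any single alert. A toy sketch, with the event fields and window size chosen purely for illustration:

```python
from collections import defaultdict

def correlate(events, window=60):
    """Group alerts by source address and flag addresses reported by
    two or more independent sensors within `window` seconds.
    events: list of dicts with 'time' (seconds), 'src', and 'sensor' keys,
    e.g. pre-parsed firewall, IDS, and proxy log entries."""
    by_src = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        by_src[e["src"]].append(e)
    suspects = []
    for src, evs in by_src.items():
        sensors = {e["sensor"] for e in evs}
        span = evs[-1]["time"] - evs[0]["time"]
        if len(sensors) >= 2 and span <= window:
            suspects.append(src)
    return suspects
```

Production SIEM correlation is far richer (sliding windows, weighted rules, asset context), but the principle of multiplying weak signals into one strong one is the same.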

The key area to watch is what is leaving an organization. Many entities have traditionally not invested significant time in tracking and watching outbound traffic, and many vendors have not built products for it. The main reason is that they have relied on proper blocking as the main level of protection. Now that prevention will not be 100% effective against the APT, timely detection is no longer a nice-to-have but a requirement. It must be done. While looking at outbound traffic is important, the other really big paradigm shift is in what we look at. Many security devices focus on examining the payload, looking for anything strange or suspicious. The problem is that one of the APT's tools for staying stealthy on a network is encryption. By encrypting the payload, attackers have changed the game in such a way that most of the security you rely on is no longer effective. Instead of looking at the payload, we have to look at the properties of the packet and its relationship to other packets. Length of the connection, size of the packets, and general content (plaintext vs. encrypted) are the key indicators of normal vs. malicious traffic. The interesting part is that what we need to examine is not difficult; but the fact that it is straightforward is irrelevant if organizations are not looking at it. The big paradigm shift that has to occur is from examining inbound payloads to examining outbound packets.
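One practical way to tell plaintext from encrypted payloads, as the paragraph suggests, is byte-level Shannon entropy: encrypted or compressed data approaches 8 bits per byte, while natural-language text sits far lower. A sketch (the 7.0 threshold is an assumption for illustration, not a standard value):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte of the payload; random or encrypted data approaches
    8.0, while English text is typically well under 5.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_encrypted(payload: bytes, threshold=7.0) -> bool:
    """Crude heuristic: high-entropy outbound payloads on ports that
    normally carry plaintext deserve a closer look."""
    return shannon_entropy(payload) > threshold
```

Entropy alone cannot distinguish encryption from legitimate compression, which is why it works best combined with the other packet properties the text mentions (connection length, packet sizes, destination).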

The good news is that organizations are catching the APT; they are just not doing it quickly enough. If the APT were 100% stealthy and never caught, we would have no idea that organizations were even being compromised and there would be nothing to talk about. The reason there is so much focus on the APT is that the attacks are becoming public and the damage is significant. One of the main reasons the damage is so significant is that we are taking too long to detect the attacks. Finding out that you are compromised is important, but finding out you are compromised six months after an attack, after all of an organization's information has been stolen, is unacceptable. Organizations need to continue doing what they are doing, but quicker, faster, and better.


URL: https://www.sciencedirect.com/science/article/pii/B9781597499491000103

What is defacement Trojan?

A defacement Trojan allows an attacker to view and edit almost any aspect of a compiled Windows program, from the menus to the dialog boxes to the icons and beyond. It applies a User-styled Custom Application (UCA) to deface a Windows application; a commonly cited example is a defaced copy of calc.exe.

What is website defacement attack?

Web defacement is an attack in which malicious parties penetrate a website and replace content on the site with their own messages.
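A basic defacement monitor follows directly from this definition: snapshot a known-good copy of the page and alert when the served content diverges. A minimal sketch that assumes a fully static page (real monitors fetch the page over HTTP and must tolerate dynamic regions such as timestamps or ads):

```python
import hashlib

def page_fingerprint(html: str) -> str:
    """Stable fingerprint of the served page content."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

def is_defaced(current_html: str, known_good_fingerprint: str) -> bool:
    """True if the page being served no longer matches the known-good
    snapshot, i.e. its content has been replaced or altered."""
    return page_fingerprint(current_html) != known_good_fingerprint
```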

How do hackers deface a website?

Defacement techniques. To hack a website and change its content, cybercriminals can:

- Brute-force the credentials of the site administrator;
- Exploit vulnerabilities in site components, for example via SQL injection (SQLi) or cross-site scripting (XSS);
- Infect the administrator's device with malware.
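The SQL injection route is defeated by parameterized queries, which bind user input as data rather than splicing it into the SQL text. A sketch using Python's built-in sqlite3 module (plaintext passwords appear only to keep the example short; real systems store salted hashes):

```python
import sqlite3

def check_login(conn, username, password):
    """Parameterized query: the driver binds user input as data, so an
    input like "' OR '1'='1" is compared literally and cannot alter
    the query's logic the way string concatenation would."""
    cur = conn.execute(
        "SELECT COUNT(*) FROM admins WHERE username = ? AND password = ?",
        (username, password),
    )
    return cur.fetchone()[0] > 0
```

Had the query been built with string concatenation, the classic `' OR '1'='1` input would match every row and log the attacker in as the administrator.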

What does malicious defacement mean?

At a basic level, "malicious defacement" means "the unauthorized alteration of existing content on your website." As hacker attacks go, malicious defacement may sound fairly harmless and, to be fair, it can be. It can, however, also be very dangerous.