Week Ten – Final Blog Post: Analysis

For the past several weeks, this blog has served both as a means of fulfilling an academic requirement and as a vehicle for personal learning and growth. Primarily, I wrote about current events and topics of interest that warranted further research or understanding. The majority of these posts focused on security issues and their potential impact on the global threat environment. Several posts were geared more toward my own learning, with little interaction with the academic discourse.

The topics of discussion ranged from current-event news feeds and subsequent research to subjects raised in various discussion forums. I drew on a variety of sources, a useful practice, since relying too heavily on a single source, such as one media outlet or one particular author, can skew the intent or output of the analysis. I attempted to use a range of legitimate resources to guard against misinformation and misguided conclusions.

I feel that this type of blog is useful to an information security professional, as it can give the researcher greater clarity on various topics and current events while providing a less formal avenue for peer review. I learned that in the cybersecurity space, it is vital to cultivate intrigue and curiosity in order to promote self-directed learning. Additionally, practice is essential to staying relevant and maintaining focus over time. While this may be the end of the academic assignment, I will likely keep using this blog to record my own thoughts and continue learning.

Week Nine – System Hardening: Administrative Controls and Residual Risk

When cybersecurity professionals consider residual risk, their immediate thoughts often turn to physical or technical controls. While those areas often carry the most risk to analyze and mitigate, administrative controls should not be dismissed as low-risk.

Arguably, administrative controls, because they deal primarily with the human element, may carry some of the highest risk, certainly in the form of inherent risk but also in residual risk. The risk level depends on factors including the organizational structure, the maturity of the security environment, and the regulations governing specific industries. Because administrative controls, considered contextually, touch essentially every layer of an organization, their risk surface is all-encompassing.

The development of administrative security controls requires an understanding of the organization's objectives and of the current regulations that must be used to address inherent risk. These regulations, often designed to protect a specific party such as the consumer, add complexity and may actually introduce new risk when the legislation dictates a specific implementation. When a specific implementation is mandated, risk can be introduced by removing the basic layer of obscurity between organizations. Admittedly, this can improve the industry overall, but failing to acknowledge the new risk is tantamount to negligence.

The recent push by several cybersecurity firms to address issues with the DMCA, in particular Section 1201, demonstrates how legislation, a form of administrative control, can introduce risk to an entire industry while attempting to provide a different benefit. Section 1201 outlines anti-circumvention mandates that protect copyright owners by prohibiting the descrambling, decryption, deactivation, impairment, bypass, or removal of a technological measure. This has proven troublesome: ethical security researchers can be subjected to suppression of their research, or of information that might damage a vendor's reputation, while vulnerabilities are allowed to persist simply because protections were placed on the software. Violators of this rule, even those acting in good faith to prevent breaches in organizations that use a vulnerable piece of software, can be subject to fines and jail time. The resulting increase in risk is difficult to measure, as the likelihood and scope of undisclosed flaws may be unknown.

Beyond risk injected by known governance, additional residual risk may remain in the processes an organization follows during a security breach. In addition to containment, identification, remediation, and the other response steps, future legal recourse by clients, vendors, or government agencies must be considered. Administrative controls that do not cover the actions to be taken before, during, and after a breach, including documentation, may leave residual risk in place. Ineffective controls on chain of custody, or on general gossip, create further risk: future lawsuits can be adversely affected, and employees not directly involved in the breach may be deposed if information about it is spread or released. This residual risk is difficult to quantify, as the ramifications could have a severe financial impact on the organization or lead to jail time for its employees.

Further, administrative controls outside the organization's scope can contribute to residual risk, as evidenced by the arrest of two Coalfire penetration testers who, despite a fully documented, planned, researched, and approved engagement, found themselves in jail while testing the security of a courthouse. Despite promptly complying with the responding officers and providing paperwork and contracts showing their lawful entry, the two testers were jailed on $100,000 bonds. They were eventually released, but the episode demonstrates failures in administrative controls within the arresting officers' department. While detainment to verify information is expected, holding these testers in jail for more than 24 hours represents a form of unknown, and therefore residual, risk.

There will always be inherent and residual risk in administrative controls, because their primary target, the human element, has the largest impact on the design, implementation, and execution of these controls. Insurance is one way to mitigate this form of risk, but acknowledging and analyzing the risks associated with administrative controls, inherent, residual, and injected, allows for increased system hardening in any organization. While providing an exact value for the residual risk of administrative controls is difficult, if not impossible, I would best describe a current working formula as (Human Element)^n: the human element raised exponentially by the number of interactions with other humans.

References:

Dosal, E. (1 October 2019). ‘What are Administrative Controls.’ CompuQuip Cybersecurity. https://www.compuquip.com/blog/what-are-administrative-security-controls

MIT. (n.d.). ‘Security Controls.’ MIT. https://web.mit.edu/rhel-doc/4/RH-DOCS/rhel-sg-en-4/s1-sgs-ov-controls.html

Osborne, C. (3 February 2020). ‘Charges Dropped Against Coalfire Security Team Who Broke Into Courthouse During Pen Test.’ ZDNet. https://www.zdnet.com/article/charges-dropped-against-penetration-testers-who-broke-into-courthouse/

Osborne, C. (5 August 2021). ‘Black Hat: How Cybersecurity Incidents Become Legal Minefields.’ ZDNet. https://www.zdnet.com/article/black-hat-how-cybersecurity-can-be-a-legal-minefield-for-lawyers/

Osborne, C. (24 June 2021). ‘Cybersecurity Firms Battle DMCA Rules Over Good-Faith Research.’ ZDNet. https://www.zdnet.com/article/cybersecurity-firms-battle-dmca-rules-over-good-faith-research/

Week Eight – Uncommon Languages as a Security Bypass

Typical organizational security measures include signature-based, or even context-based, recognition to identify malware inside the network. Attackers, forever creative, have begun to use less common languages either to write their malware or as a file dropper that writes the malware to disk or into memory.

There are several reasons an attacker may prefer uncommon languages. As these languages mature, they become cross-platform compatible, allowing attackers to use one piece of malware across many different systems without rewriting it for each type. Researchers at BlackBerry have identified Go, Rust, DLang, and Nim as the most commonly used of these uncommon languages. Because they are uncommon, organizations may lack signatures to identify the code, and in some cases the binaries are complex and much harder for security professionals to analyze. Additionally, these languages, Rust especially, use less memory and have lower execution requirements, which makes them attractive for targeting Internet of Things devices built with lower-end components.

Last year’s FireEye breach, which resulted in the loss of the organization’s red-team tools, showed that many of those tools were written in these uncommon languages, including Rust and Go. Because current system configurations may not recognize these languages or their code as a threat, wrapping traditional malware, in the form of an encrypted file, inside code written in these languages may allow attackers to bypass traditional security measures. It is also important to note that security professionals may be unfamiliar with these languages, creating a distinct learning curve that attackers can exploit to infiltrate systems.
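To make the signature gap concrete, here is a minimal sketch, in Python and purely illustrative, of the kind of crude check a defender might add while richer rules mature. Go toolchain binaries embed a "Go build ID" marker string that can simply be searched for; the sample byte strings below are invented, and treating the marker alone as suspicious would be far too noisy for real detection.

```python
# Illustrative only: Go binaries embed a "Go build ID" marker string.
# A naive scanner can flag files containing it so an analyst knows to
# look closer. Presence of the marker is NOT proof of malice -- most
# Go binaries are perfectly legitimate.

def looks_like_go_binary(blob: bytes) -> bool:
    """Return True if the byte blob contains the Go build ID marker."""
    return b"Go build ID" in blob

# Hypothetical samples for demonstration:
suspect = b'\x7fELF...Go build ID: "abc123"...rest of file'
plain_c = b"\x7fELF...no such marker here"
print(looks_like_go_binary(suspect))  # True
print(looks_like_go_binary(plain_c))  # False
```

Real tooling would combine many such indicators (imports, section names, string tables) rather than a single substring.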

Training in new technology is therefore important for security professionals, and an effective security team should treat up-skilling as a percentage of work and a baseline component of the job. An organization that fails to advance its employees’ knowledge will surely be left behind, or become the next big headline of a security breach.

Reference: Sheridan, K. (26 July 2021). ‘Attackers’ Use of Uncommon Programming Languages Continues to Grow.’ Dark Reading.

Week Seven – Cyber Security Regulation

Regulation is often a love-hate relationship in most industries. However, it has been shown to have a positive impact in new or changing markets, especially those failing to improve at the pace necessary to protect the industry’s growth. Cybersecurity is no exception.

On July 20th, the Transportation Security Administration, with the help of the Department of Homeland Security, issued a new regulation for owners and operators of pipelines in the U.S. This was a needed change, as the regulation enforces improvements to the security structure and operations of these critical infrastructure components, which had fallen dramatically behind in protection from cyber attack.

This regulation, spurred in part by the recent ransomware attack on Colonial Pipeline, requires that these operators and owners:

• Develop and implement a “contingency and recovery” plan for cyber intrusions

• Compare the plan with DHS standards, identify gaps, develop measures to fill them, and gain approval for them from the Cybersecurity and Infrastructure Security Agency, or CISA

• Appoint and identify, within seven days, a cyber coordinator (and a backup cyber coordinator) who is available to the DHS’s CISA officials “24/7”

• Report all cyber intrusions to CISA within 12 hours of the incident. (Kaplan, 2021)

While this regulation affects only one part of U.S. critical infrastructure, its requirements may expand to other industries deemed critical, such as electricity, water, sewer, and railroads. Overall, if properly adopted, implemented, and enforced, this legislation can improve the security posture of the entire country’s vital infrastructure. In addition, the industry will need trained and educated cybersecurity professionals to meet the new requirements, a certain boon for a growing field.

Reference: Kaplan, F. (23 July 2021). ‘The U.S. Takes an Important Cybersecurity Step – Two Decades Late.’ Slate.

Week Six – The Importance of System Relevance

As humans we naturally age, and, as expected, so do information systems, software, techniques, and their associated networks. This aging can become a problem for organizations that neglect to dedicate enough resources to advancing their systems and keeping them up to date, a concern made more pressing by the rapid pace at which new technologies and devices are added to the network. A sometimes overlooked and ever-increasing risk to an organization is the need to address end-of-life (EOL) systems and software. When these reach their designated EOL, patching and updating stop, and any remaining or newly discovered vulnerabilities will persist in the system.

Organizations that continue to use EOL systems and software, such as outdated operating systems (OS), are effectively accepting the high risks associated with them. In some cases this could be considered negligent behavior, as known vulnerabilities can easily expose confidential information during a breach. Some developers will, as a courtesy, issue notices about new exploits targeting their previous software or hardware. This is evidenced by SonicWall’s recent announcement that its EOL Secure Mobile Access (SMA) 100 and Secure Remote Access products running unpatched, end-of-life 8.x firmware are vulnerable to a ransomware campaign. SonicWall is asking any clients using these products to disconnect them from the network immediately and reset all passwords (Greig, 2021).
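The underlying hygiene task, knowing which assets have passed their vendor's EOL date, can be sketched in a few lines. The product names and dates below are invented for illustration; they are not real vendor lifecycle data:

```python
from datetime import date

# Hypothetical asset inventory mapping products to vendor EOL dates.
EOL_DATES = {
    "legacy-vpn-8.x": date(2019, 10, 31),   # long past end-of-life
    "branch-firewall": date(2025, 6, 1),    # still supported
}

def past_eol(product: str, today: date) -> bool:
    """True if the product's end-of-life date is on or before today."""
    return EOL_DATES[product] <= today

today = date(2021, 7, 18)
print([p for p in EOL_DATES if past_eol(p, today)])  # ['legacy-vpn-8.x']
```

A real program would pull this data from an asset-management system and vendor lifecycle feeds rather than a hard-coded dictionary.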

The risk of running EOL products in a production environment is tantamount to running through a minefield and hoping you don’t hit anything. EOL products generally carry known vulnerabilities, making them the low-hanging fruit that attackers target first. In some cases, freely available automated tools can breach a system running old technology. The costs of litigation, remediation, and lost revenue alone should scare any technology manager and business leader into ensuring the necessary funds and processes are in place to keep systems and software up to date.

Moral of the story: don’t run old software and networks; they are a very easy attack vector for any number of attackers.

References:

Greig, J. (14 July 2021). ‘SonicWall Releases Urgent Notice About “Imminent” Ransomware Targeting Firmware.’ ZDNet.

ABC Services. (n.d.). ‘The Risks of End-of-Life Technology.’ Accessed 18 July 2021.

Week Five – Trusted Platform Module

Microsoft has recently announced that a TPM will be mandatory for the next iteration of Windows, Windows 11. So what are TPMs?

A TPM, or Trusted Platform Module, is an integrated chip, or an add-on module for the system motherboard, that adds hardware-level security to the system. TPMs can be used to encrypt disk drives and to stop dictionary attacks. Firmware attacks are on the rise, and the use of TPMs may reduce their impact.

In particular, Windows 11 is said to require TPM 2.0 and specific processors, which could mean that anything older than 8th-generation Intel or AMD Ryzen 2000 chips will not be supported. While we are still fairly early in the development cycle, it is promising that Microsoft is opting to push for higher security, as many attacks now exploit low-level firmware and code.

How does a TPM work? As part of the hardware, the chip sits on the motherboard’s I/O paths and uses an onboard random-number generator and a secure cryptographic engine to protect and encrypt content as it flows through the system.
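One core TPM mechanism, the Platform Configuration Register (PCR) "extend" operation, can be mimicked in software to show the idea. This is strictly an analogy, a real TPM performs it in hardware with values that never leave the chip, and the boot components named below are hypothetical:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = hash(old PCR || hash(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# Measure a (hypothetical) boot chain into a single register.
pcr = b"\x00" * 32
for component in (b"firmware", b"bootloader", b"kernel"):
    pcr = extend(pcr, component)

# Tampering with any component yields a different final value, so a
# secret "sealed" against the original PCR cannot be released.
tampered = b"\x00" * 32
for component in (b"firmware", b"evil-bootloader", b"kernel"):
    tampered = extend(tampered, component)

print(pcr != tampered)  # True
```

This one-way accumulation is what lets a TPM attest that the platform booted with known-good components.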


References:

Warren, T. (25 June 2021). ‘Why Windows 11 Is Forcing Everyone to Use TPM Chips.’ The Verge.

TCG. (n.d.). ‘Trusted Platform Module (TPM) Summary.’ Trusted Computing Group. Accessed 11 July 2021.

Week Four – Remote Code Execution

Remote code execution is a method of attack whereby the attacker sends commands to a remote computer, which then executes them. In most cases, such execution is malicious in nature and not authorized by the system owner.

Remote code execution is generally the second stage of an attack, the first being the exploitation of a known vulnerability. This type of attack is among the most severe, as an attacker who can execute code on a target system can often exfiltrate data, delete data, install malicious software, or cause other forms of harm.
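The failure mode is easy to demonstrate in miniature. In the sketch below, Python's `eval` stands in for any code-execution sink; real RCE flaws live in services like the print spooler, not in `eval`, but the pattern, attacker-supplied text becoming executed code, is the same. Both functions are contrived examples:

```python
import re

def insecure_calc(expression: str):
    # Attacker-controlled input is handed straight to the interpreter.
    return eval(expression)

print(insecure_calc("2 + 2"))  # intended use: 4
# A hostile caller could instead execute arbitrary code, e.g.:
#   insecure_calc("__import__('os').system('whoami')")

def safer_calc(expression: str):
    # Mitigation sketch: allow only arithmetic characters before eval.
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expression):
        raise ValueError("expression contains disallowed characters")
    return eval(expression)

print(safer_calc("(1 + 2) * 3"))  # 9
```

The general defense is the same at any scale: validate or refuse untrusted input before it ever reaches something that can execute it.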

The Windows print spooler, a common source of vulnerabilities, is again the focus this week, as an unpatched critical flaw came to light when researchers accidentally published a proof-of-concept exploit. Microsoft has warned that the vulnerability is being actively exploited and that the vulnerable code exists in all versions of Windows. Further, the print spooler service runs by default, and CISA suggests that businesses mitigate until a patch is released by disabling the print spooler on systems not used for printing. This is just another example of print spooler problems; another famous example is the Stuxnet worm.

This remote code execution vulnerability allows an attacker to install programs, modify data, and create new accounts with full administrator privileges. While Microsoft has not yet rated this vulnerability, remote code execution flaws are typically rated severe.

References:

Bugcrowd. (n.d.). ‘Remote Code Execution (RCE).’ Bugcrowd.com.

Warren, T. (2 July 2021). ‘Microsoft Warns of Windows “PrintNightmare” Vulnerability That’s Being Actively Exploited.’ The Verge.

Week Three – Crackonosh: Hacking to Mine Cryptocurrency

Botnets come in many forms, most often put to nefarious uses such as DDoS attacks. The growing acceptance of cryptocurrency investing has started to change that. Are these new botnets the next generation?

While cryptocurrency has been around for a number of years, for most of its existence it was stigmatized as a means of payment for illicit activities. In its infancy, miners collected vast amounts of coins for little value and traded them away for little return; famously, Laszlo Hanyecz paid 10,000 bitcoins for two pizzas in May of 2010. While some individuals undoubtedly foresaw the currency as a means of addressing issues with common fiat money, the general consensus was that mining or holding it was a way to draw attention from government agencies. Despite the currency being built on a measure of anonymity, that perception persists to this day.

General awareness of cryptocurrencies only began to emerge a few years ago, when news broke that the largest cryptocurrency at the time, Bitcoin, had hit a value of $30,000 per coin. Interest waned slightly when it crashed soon after; this cycle has since become routine in cryptocurrency markets, which combine high volatility with potentially high reward. Among smaller cryptocurrencies, pump-and-dump scams are rampant and evidenced daily. Cryptocurrencies remain an unregulated market in the United States, though SEC and IRS interest continues to grow. As of June 27th, 2021, the combined value of cryptocurrencies is in excess of $1.3 trillion, having hit as high as $2.5 trillion in recent months. Certainly the coronavirus pandemic played its part, as many workers stayed home and looked for new avenues of employment or ways to occupy their time, driving the market to all-time highs. The market has cooled slightly as pandemic restrictions have eased and many individuals have returned to regular work. Despite this, the value of cryptocurrencies cannot be overstated and will likely grow over time. Just ask Laszlo Hanyecz, whose 10,000 bitcoins, as of March 14th, 2021, would have been worth a cool $613 million.
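As a quick sanity check of the pizza figure, the per-coin price below is back-calculated from the $613 million total reported by Tayeb (2021); it is an inference from that total, not a quoted market price:

```python
coins = 10_000                      # BTC paid for the two pizzas in 2010
total_value = 613_000_000           # USD value per the March 2021 report
implied_price = total_value / coins
print(f"${implied_price:,.0f} per coin")  # $61,300 per coin
```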

This rampant rise in cryptocurrencies, and the subsequent drive by world populations to get a piece of the pie, as it were, has created a shortage of GPUs, the hardware used for crypto mining, and aspiring hackers have opted to use code and botnets to earn money for themselves. The virus called Crackonosh, while not strictly a botnet since the infected machines do not operate in tandem, can effectively be considered a distributed network of computers lending their resources to the code’s author. The virus, apparently operating stealthily since 2018, infects systems through free software downloads that contain the malicious code.

(Let this be a lesson once again, nothing is free).

Once downloaded, the virus works to remove detection systems, replacing specific files so that the system appears to remain protected by antivirus software and Windows Defender. By replacing these and other critical files, abusing Safe Mode operations, and renaming files, the virus establishes a hold on the system and dedicates its resources to mining the cryptocurrency Monero. While the underlying technique was reported at least six months ago by security researcher Roberto Franceschetti, Microsoft did not consider it a high enough priority to address. The virus has been seen in at least 30 different iterations thus far. While it is not the first malicious code of its kind, it certainly will not be the last, and as these programs become more aware of defensive measures, avoiding or removing them, cybersecurity researchers and defenders will continue to have a difficult time preventing intrusions when faced with the human element.

References:

CoinMarketCap. (n.d.). ‘Global Cryptocurrency Market Charts.’ Retrieved 27 June 2021.

Lakshmanan, R. (25 June 2021). ‘Crackonosh Virus Mined $2 Million of Monero from 222,000 Hacked Computers.’ The Hacker News.

Tayeb, Z. (14 March 2021). ‘Bitcoin’s Surge Beyond $60,000 Means the Famed Programmer Laszlo Hanyecz Effectively Paid $613 Million for 2 Pizzas.’ Business Insider.

Week Two – DNS Sinkholes

While researching the use of honeypots, I came across the term sinkholes. One of the best ways to increase understanding of any subject is to examine various components with which they are associated.

So, what are DNS sinkholes? How are they implemented and what are their uses?

To answer these questions, we must understand how the internet works at a basic level. A user enters a URL, or web address, and a DNS server returns the IP address needed for the connection; this lets users refer to websites by a memorable name instead of the actual IP address. A DNS sinkhole causes the DNS server to return a false IP address, rerouting the user to a destination of the operator’s choosing (Mazerik, 2021). In this way it is possible to deny access to a website.

A DNS forwarder can be used to create and operate the sinkhole; to implement one, the user or organization must own, or hold ownership rights to, the domain. Once active, a sinkhole can be used by researchers to reroute DNS queries for analysis. Often used to control network flood traffic or to contain botnets, sinkholes can also help keep a site reachable while it is under various forms of attack.
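At its core, the mechanism is just a lookup that answers blocklisted names with a controlled address. The sketch below illustrates that idea; the domain names, blocklist, and IP addresses are invented, and a real deployment would live in the resolver itself (for example, a DNS forwarder's zone configuration), not in application code:

```python
SINKHOLE_IP = "10.0.0.99"   # hypothetical internal host that logs hits
BLOCKLIST = {"malware-c2.example", "phishing.example"}

# Stand-in for real DNS resolution of legitimate names.
REAL_DNS = {"intranet.example": "192.0.2.10"}

def resolve(name: str) -> str:
    """Answer blocklisted names with the sinkhole address, others normally."""
    if name in BLOCKLIST:
        return SINKHOLE_IP
    return REAL_DNS.get(name, "0.0.0.0")

print(resolve("malware-c2.example"))  # 10.0.0.99 -> query captured
print(resolve("intranet.example"))    # 192.0.2.10 -> normal answer
```

Because infected hosts keep querying their command-and-control names, every hit on the sinkhole address doubles as a detection signal for defenders.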


References:

Mazerik, R. (17 May 2021). ‘Understanding DNS Sinkholes: A Weapon Against Malware.’ Infosec Institute.

Newman, L. (2 January 2018). ‘Hacker Lexicon: What Is Sinkholing?’ Wired.

Week One – System Hardening Guidelines

Hysolate has published system hardening guidelines for 2021. Many of these may seem like basic information, but we still see breaches that result from basic security measures being missing or poorly implemented.

These basic guidelines are effectively a small checklist that includes:

  • Automatically applying OS updates
  • Removing or disabling non-essential services, software, drivers, file-sharing options, and remote desktop functionality
  • Requiring strong passwords and enforcing complexity and mandatory changes
  • Logging all activities, warnings, and errors
  • Restricting unauthorized access and implementing access controls
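As one concrete example from the checklist, the "strong passwords" item might be enforced with a complexity check like the sketch below. The length and character-class thresholds are illustrative, not a recommendation, and real environments enforce this through the directory service or PAM rather than ad-hoc code:

```python
import re

def meets_complexity(password: str, min_length: int = 12) -> bool:
    """Require minimum length plus lower, upper, digit, and symbol classes."""
    return (len(password) >= min_length
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[0-9]", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

print(meets_complexity("Tr0ub4dor&3x!"))  # True
print(meets_complexity("password"))       # False
```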

These are the most basic implementations, and they work well for controlling basic access, but what about user error, phishing, and visits to unapproved websites? Implementing virtual machines can help by separating users into sandboxed environments. The benefits of this type of computing include reduced hardware and easier disaster recovery. In addition, new virtual machines can be started quickly, and updates can be automated to give the client a seamless experience.

Despite the ease of use of virtual machines, human error can never be ruled out, and misconfigurations, while often recoverable, can cause availability issues. Further, organizations must ensure that any data subject to protections, including PII and credit card data, is secured in accordance with legislation and defined policies.

Zlotnik, O. (5 March 2021). ‘System Hardening Guidelines for 2021: Critical Best Practices.’ Hysolate.

Kegerreis, M., Schiller, M., & Davis, C. (2020). IT Auditing: Using Controls to Protect Information Assets (3rd ed.). McGraw-Hill Education.