Cyber security compliance in South Africa’s maritime sector
- Authors: Steenberg, Wynand
- Date: 2024-04
- Subjects: Computer security -- Management , Computer networks -- Security measures , Cyberspace -- Security measures , Shipping -- Security measures
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/65445 , vital:74152
- Description: Globally, cyber attacks on the maritime industry have escalated exponentially, indicating that cyber security should be an international priority. The majority of maritime digital systems are connected to the cloud through information technologies, creating vulnerability to cyber attacks. Cyber security is therefore critical to protecting the maritime industry’s operations and ensuring their resilience. In July 2021, a cyber attack targeted the Transnet National Ports Authority (TNPA) and caused significant disruption to operations at multiple South African ports. Port operations were reduced to manual processes, which resulted in severe congestion. Transnet declared force majeure during the two weeks it took to reinstate minimal operations. The TNPA cyber attack emphasised the need for effective strategies and procedures to prevent and recover from cyber attacks in the South African maritime industry. As maritime transport contributes significantly to South Africa’s gross domestic product (GDP), the incident had a detrimental effect on the country’s economy. It is, therefore, important to ensure that the South African maritime industry remains aware of the latest strategies and guidelines for cyber risk management. This study set out to establish whether South Africa’s maritime industry complies with international standards for cyber risk management by analysing the International Maritime Organization’s (IMO) guidelines for cyber risk management. South Africa’s cyber risk management strategies were evaluated, and recommendations were made to improve the strategies and procedures currently employed in the South African maritime industry. , Thesis (MPhil) -- Faculty of Business and Economic Sciences, School of Economics, Development and Tourism, 2024
- Full Text:
- Date Issued: 2024-04
Incorporating emotion detection in text-dependent speaker authentication
- Authors: van Rensburg, Ebenhaeser Otto Janse , Von Solms, Rossouw
- Date: 2024-04
- Subjects: Automatic speech recognition , Biometric identification , Computer networks -- Security measures , Computer networks -- Access control
- Language: English
- Type: Doctoral theses , text
- Identifier: http://hdl.handle.net/10948/64566 , vital:73767
- Description: Biometric authentication allows a person to access sensitive information using unique physical characteristics. Voice, as a biometric authentication method, is gaining popularity due to its unique characteristics and widespread availability on smartphones and other devices. It offers a secure and user-friendly alternative to traditional password-based authentication and is less intrusive than fingerprint authentication. Furthermore, a vast amount of information is conveyed through voice, such as age, gender, health, and emotional state. Gaining illegitimate access to information becomes significantly more difficult as biometrics are difficult to steal, and countermeasures to techniques such as replay attacks are constantly being improved. However, illegitimate access can still be gained by forcing a legitimate person to authenticate themselves through voice. This study investigates how the emotion(s) carried by voice can assist in detecting whether authentication was performed under duress. Knowledge is contributed using a three-phased approach: information gathering, experimentation, and deliberation. The experimentation phase is further divided into three sub-phases to extract data, implement findings, and assess the value of determining duress using voice. This phased approach to experimentation ensures minimal change in variables and allows the conclusions drawn to be relevant to each phase. The first phase examines datasets and classifiers; the second phase explores feature enhancement techniques and their impact; and the third phase discusses performance measurements and their value to emotion detection. , Thesis (DPhil) -- Faculty of Engineering, the Built Environment and Technology, School of Information Technology, 2024
- Full Text:
- Date Issued: 2024-04
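The abstract above describes a pipeline that extracts features from voice recordings and classifies the speaker's emotional state. As a rough, hedged illustration of such a pipeline (not the author's implementation), the sketch below extracts mean MFCC features with librosa and trains a scikit-learn classifier; the directory layout, file-name labels, and sample rate are assumptions made purely for illustration.

```python
# Hedged sketch: emotion classification from voice features.
# Assumes a directory of labelled WAV files such as "angry_01.wav"; not the thesis code.
import glob
import os

import librosa  # audio loading and MFCC extraction
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def extract_features(path: str) -> np.ndarray:
    """Load an audio file and summarise it as its mean MFCC coefficients."""
    signal, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)  # one fixed-length vector per recording

# The emotion label is assumed to be encoded in the file-name prefix.
paths = glob.glob("speech_samples/*.wav")
X = np.array([extract_features(p) for p in paths])
y = np.array([os.path.basename(p).split("_")[0] for p in paths])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"Held-out emotion classification accuracy: {clf.score(X_test, y_test):.2f}")
```

Swapping the classifier or adding feature enhancement steps (as explored in the thesis' second experimental phase) would slot in between feature extraction and training without changing the overall shape of the pipeline.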
An exploratory investigation into an Integrated Vulnerability and Patch Management Framework
- Authors: Carstens, Duane
- Date: 2021-04
- Subjects: Computer security , Computer security -- Management , Computer networks -- Security measures , Patch Management , Integrated Vulnerability
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/177940 , vital:42892
- Description: In the rapidly changing world of cybersecurity, the constant increase in vulnerabilities continues to be a prevalent issue for many organisations. Malicious actors are aware that most organisations cannot timeously patch known vulnerabilities and are ill-prepared to protect against newly created vulnerabilities for which a signature or patch is not yet available. Consequently, information security personnel face ongoing challenges to mitigate these risks. In this research, the problem of remediation in a world of increasing vulnerabilities is considered. The current paradigm of vulnerability and patch management is reviewed using a pragmatic approach to all variables associated with these services/practices and, as a result, an understanding is gained of what is and is not working in terms of remediation. In addition to the analysis, a taxonomy is created to provide a graphical representation of all variables associated with vulnerability and patch management, based on existing literature. Frameworks currently utilised in industry to create an effective engagement model between vulnerability and patch management services are considered. The link between quantifying a threat, vulnerability and consequence; what Microsoft has available for patching; and the action plan for resulting vulnerabilities is explored. Furthermore, the processes and means of communication between each of these services are investigated to ensure effective remediation of vulnerabilities, ultimately improving the security risk posture of an organisation. In order to measure the security risk posture effectively, progress between each of these services is measured through a single averaged measurement metric. The outcome of the research highlights influencing factors that impact successful vulnerability management, in line with the themes identified in the research taxonomy. These influencing factors are, however, significantly undermined because resources within the same organisations lack a clear and consistent understanding of their roles, organisational capabilities, and objectives for effective vulnerability and patch management. , Thesis (MSc) -- Faculty of Science, Computer Science, 2021
- Full Text:
- Date Issued: 2021-04
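The abstract mentions measuring progress between the vulnerability and patch management services through a single averaged measurement metric. The thesis' own formula is not reproduced here; the sketch below is only a hedged illustration of what such an averaged posture score could look like, with the service names, counts, and equal weighting all assumed for the example.

```python
# Hedged sketch: collapsing per-service remediation progress into one averaged metric.
# Service names, example counts, and equal weighting are illustrative assumptions.

def remediation_ratio(resolved: int, identified: int) -> float:
    """Fraction of identified items that have been remediated (0.0 to 1.0)."""
    return resolved / identified if identified else 1.0

# Example progress figures for each service in the engagement model (made up).
services = {
    "vulnerability_management": remediation_ratio(resolved=420, identified=600),
    "patch_management": remediation_ratio(resolved=150, identified=200),
    "threat_quantification": remediation_ratio(resolved=30, identified=50),
}

# Single averaged measurement metric: the mean of the per-service ratios.
posture_score = sum(services.values()) / len(services)
print(f"Averaged security risk posture metric: {posture_score:.2%}")
```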
An investigation into the current state of web based cryptominers and cryptojacking
- Authors: Len, Robert
- Date: 2021-04
- Subjects: Cryptocurrencies , Malware (Computer software) , Computer networks -- Security measures , Computer networks -- Monitoring , Cryptomining , Coinhive , Cryptojacking
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/178248 , vital:42924
- Description: The aim of this research was to conduct a review of the current state and extent of surreptitious crypto mining software and its prevalence as a means of income generation. Income is generated through the use of a viewer's browser to execute custom JavaScript code to mine cryptocurrencies such as Monero and Bitcoin. The research aimed to measure the prevalence of illicit mining scripts being utilised for "in-browser" cryptojacking while further analysing the ecosystems that support the cryptomining environment. The extent of the research covers aspects such as the content (or type) of the sites hosting malicious "in-browser" cryptomining software, the occurrence of the currencies utilised in the cryptographic mining, and the analysis of cryptographic mining code samples. This research aims to compare the results of previous work with the current state of affairs since the closure of Coinhive in March 2019. Coinhive was at the time the market leader in such web-based mining services. Beyond the analysis of the prevalence of cryptomining on the web today, research into the methodologies and techniques used to detect and counteract cryptomining was also conducted. This includes the most recent developments in malicious JavaScript de-obfuscation as well as cryptomining signature creation and detection. Methodologies for heuristic JavaScript behaviour identification and the subsequent identification of potential malicious outliers are also included within the countermeasure analysis. The research revealed that, although no longer functional, Coinhive remained the most prevalent script being used for "in-browser" cryptomining services. While remaining the most prevalent, there was however a significant decline in overall occurrences compared to when coinhive.com was operational. The ecosystem hosting "in-browser" mining websites was found to be distributed both geographically and in terms of domain categorisations. , Thesis (MSc) -- Faculty of Science, Computer Science, 2021
- Full Text:
- Date Issued: 2021-04
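To make the measurement of "in-browser" mining prevalence concrete, the following hedged sketch fetches a page and checks its HTML for a few well-known miner script indicators (Coinhive and similar). The indicator list and the target URL are assumptions for illustration; the thesis' own crawler and signature set are not reproduced here.

```python
# Hedged sketch: naive detection of "in-browser" cryptomining scripts in a web page.
# The signature list is illustrative and far from exhaustive.
import re

import requests

MINER_SIGNATURES = [
    r"coinhive\.min\.js",    # the (now defunct) Coinhive loader script
    r"CoinHive\.Anonymous",  # Coinhive in-page API usage
    r"cryptonight",          # mining algorithm name commonly embedded in miner code
    r"webminerpool",         # generic pooled web miner (assumed indicator)
]

def scan_for_miners(url: str) -> list[str]:
    """Return the miner signatures found in the page served at `url`."""
    html = requests.get(url, timeout=10).text
    return [sig for sig in MINER_SIGNATURES if re.search(sig, html, re.IGNORECASE)]

if __name__ == "__main__":
    hits = scan_for_miners("https://example.com/")  # placeholder target
    print("Possible cryptomining indicators:", hits or "none found")
```

A study at the scale described in the abstract would run a check of this kind across large site lists and record the hosting site's category and the currency being mined alongside each hit.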
Evaluating the cyber security skills gap relating to penetration testing
- Authors: Beukes, Dirk Johannes
- Date: 2021
- Subjects: Computer networks -- Security measures , Computer networks -- Monitoring , Computer networks -- Management , Data protection , Information technology -- Security measures , Professionals -- Supply and demand , Electronic data personnel -- Supply and demand
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/171120 , vital:42021
- Description: Information Technology (IT) is growing rapidly and has become an integral part of daily life. It provides a vast range of services and opportunities and generates vast volumes of information, which could be abused or exploited. Due to this growth, thousands of new users are added to the grid, using computer systems in both static and mobile environments; this alone creates endless volumes of data to be exploited and hardware devices to be abused by the wrong people. The growth in the IT environment adds challenges that may affect users in their personal, professional, and business lives. There are constant threats to corporate and private computer networks and computer systems. In the corporate environment, companies try to eliminate the threat by testing networks using penetration tests and by implementing cyber awareness programs to make employees more aware of the cyber threat. Penetration tests and vulnerability assessments are undervalued: they are seen as a formality and are not used to increase system security. If used regularly, they would make computer systems more secure and minimise attacks. With the growth in technology, industries all over the globe have become fully dependent on information systems in doing their day-to-day business. As technology evolves and new technology becomes available, the risk that must be managed to protect against the dangers that come with it grows. For industry to protect itself against this growth in technology, personnel with a certain skill set are needed. This is where cyber security plays a very important role in the protection of information systems, ensuring the confidentiality, integrity and availability of the information system itself and the data on the system. Due to this drive to secure information systems, the need for cyber security professionals is also on the rise. It is estimated that there is a shortage of one million cyber security professionals globally. What is the reason for this skills shortage? Will it be possible to close this skills gap? This study is about identifying the skills gap and identifying possible ways to close it. In this study, research was conducted on international cyber security standards, cyber security training at universities, and international certification, focusing specifically on penetration testing; the needs of industry when recruiting new penetration testers were evaluated, and the study finishes with suggestions on how to fill possible gaps in the skills market, followed by a conclusion.
- Full Text:
- Date Issued: 2021
An exploration of the overlap between open source threat intelligence and active internet background radiation
- Authors: Pearson, Deon Turner
- Date: 2020
- Subjects: Computer networks -- Security measures , Computer networks -- Monitoring , Malware (Computer software) , TCP/IP (Computer network protocol) , Open source intelligence
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/103802 , vital:32299
- Description: Organisations and individuals are facing increasing persistent threats on the Internet from worms, port scanners, and malicious software (malware). These threats are constantly evolving as new attack techniques are discovered. To aid in the detection and prevention of such threats, and to stay ahead of the adversaries conducting the attacks, security specialists are utilising Threat Intelligence (TI) data in their defence strategies. TI data can be obtained from a variety of sources such as private routers, firewall logs, public archives, and public or private network telescopes. However, at the rate and ease at which TI is produced and published, specifically Open Source Threat Intelligence (OSINT), the quality is dropping, resulting in fragmented, context-less and variable data. This research utilised two sets of TI data: a collection of OSINT and active Internet Background Radiation (IBR). The data was collected over a period of 12 months, from 37 publicly available OSINT datasets and five IBR datasets. Through the identification and analysis of data common to the OSINT and IBR datasets, this research was able to gain insight into how effective OSINT is at detecting and potentially reducing ongoing malicious Internet traffic. As part of this research, a minimal framework for the collection, processing/analysis, and distribution of OSINT was developed and tested. The research focused on exploring the areas common to the two datasets, with the intention of creating an enriched, contextualised, and reduced set of malicious source IP addresses that could be published for consumers to use in their own environments. The findings pointed towards a persistent group of IP addresses observed in both datasets over the period under research. Using these persistent IP addresses, the research was able to identify specific services being targeted. Amongst these persistent IP addresses were significant volumes of packets from Mirai-like IoT malware on ports 23/TCP and 2323/TCP, as well as general scanning activity on port 445/TCP.
- Full Text:
- Date Issued: 2020
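The core of the analysis described above is finding source IP addresses common to the OSINT feeds and the IBR capture and then isolating the persistent ones. A minimal sketch of that overlap step is shown below, assuming the feeds have already been normalised into text files of one IP per line; the file names and the "6 of 12 months" persistence threshold are assumptions, not the thesis' definitions.

```python
# Hedged sketch: intersecting merged OSINT indicators with monthly IBR source addresses
# and keeping only IPs seen in several monthly captures ("persistent").
from pathlib import Path

def load_ips(path: str) -> set[str]:
    """Read one IP address per line, ignoring blank lines and comments."""
    return {
        line.strip()
        for line in Path(path).read_text().splitlines()
        if line.strip() and not line.startswith("#")
    }

osint = load_ips("osint_all_feeds.txt")             # merged OSINT indicators (assumed file)
monthly_ibr = [load_ips(f"ibr_month_{m:02d}.txt")   # monthly IBR source IPs (assumed files)
               for m in range(1, 13)]

# IPs appearing in OSINT and in at least 6 of the 12 monthly IBR captures.
persistent = {
    ip for ip in osint
    if sum(ip in month for month in monthly_ibr) >= 6
}
print(f"{len(persistent)} persistent overlapping source IPs")
```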
Securing software development using developer access control
- Authors: Ongers, Grant
- Date: 2020
- Subjects: Computer software -- Development , Computers -- Access control , Computer security -- Software , Computer networks -- Security measures , Source code (Computer science) , Plug-ins (Computer programs) , Data encryption (Computer science) , Network Access Control , Data Loss Prevention , Google’s BeyondCorp , Confidentiality, Integrity and Availability (CIA) triad
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/149022 , vital:38796
- Description: This research is aimed at software development companies and highlights the unique information security concerns in the context of a non-malicious software developer’s work environment; furthermore, it explores an application-driven solution which focuses specifically on providing developer environments with access control for source code repositories. In order to achieve that, five goals were defined, as discussed in section 1.3. The application designed to provide the developer environment with access control to source code repositories was modelled on lessons taken from the principles of Network Access Control (NAC), Data Loss Prevention (DLP), and Google’s BeyondCorp (GBC) for zero-trust end-user computing. The intention of this research is to provide software developers with maximum access to source code without compromising Confidentiality, as per the Confidentiality, Integrity and Availability (CIA) triad. Employing data gleaned from examining the characteristics of DLP, NAC, and BeyondCorp, proof-of-concept code was developed to regulate access to the developer’s environment and source code. The system required sufficient flexibility to support the diversity of software development environments; in order to achieve this, a modular design was selected. The system comprised a client-side agent and a plug-in-ready server component. The client-side agent mounts and dismounts encrypted volumes containing source code. Furthermore, it provides the server with information about the client that is demanded by plug-ins. The server-side service provides encryption keys to facilitate the mounting of the volumes and, through plug-ins, asks questions of the client agent to determine whether access should be granted. The solution was then tested with integration and system testing. There were plans to have it used by development teams, who would then be surveyed on their view of the proof of concept, but this proved impossible. The conclusion provides a basis by which organisations that develop software can better balance the two corners of the CIA triad most often in conflict: the Confidentiality of their source code against the Availability of the same to developers.
- Full Text:
- Date Issued: 2020
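The abstract describes a plug-in-ready server component that asks questions of the client agent and decides whether an encryption key, and therefore access to the source code volume, should be released. The sketch below illustrates one possible shape of such a plug-in interface; the class names, client attributes, and checks are assumptions, not the thesis' implementation.

```python
# Hedged sketch: a plug-in-ready access decision for releasing a volume encryption key.
# Plug-in names and client attributes are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional, Protocol

@dataclass
class ClientInfo:
    """Facts the client-side agent reports about itself when asked."""
    hostname: str
    on_corporate_vpn: bool
    disk_encrypted: bool

class AccessPlugin(Protocol):
    def allow(self, client: ClientInfo) -> bool: ...

class RequireVPN:
    def allow(self, client: ClientInfo) -> bool:
        return client.on_corporate_vpn

class RequireDiskEncryption:
    def allow(self, client: ClientInfo) -> bool:
        return client.disk_encrypted

def grant_key(client: ClientInfo, plugins: list) -> Optional[str]:
    """Release the (placeholder) volume key only if every plug-in agrees."""
    if all(plugin.allow(client) for plugin in plugins):
        return "BASE64-ENCODED-VOLUME-KEY"  # placeholder value, not a real key
    return None

client = ClientInfo(hostname="dev-laptop-01", on_corporate_vpn=True, disk_encrypted=True)
print(grant_key(client, [RequireVPN(), RequireDiskEncryption()]))
```

New policy checks (device posture, working hours, repository sensitivity) would be added as further plug-in classes without touching the key-release logic, which mirrors the modular design choice described in the abstract.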
A comparative study of CERBER, MAKTUB and LOCKY Ransomware using a Hybridised-Malware analysis
- Authors: Schmitt, Veronica
- Date: 2019
- Subjects: Microsoft Windows (Computer file) , Data protection , Computer crimes -- Prevention , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92313 , vital:30702
- Description: There has been a significant increase in the prevalence of Ransomware attacks in the preceding four years to date. This indicates that the battle has not yet been won in defending against this class of malware. This research proposes that by identifying the similarities within the operational framework of Ransomware strains, a better overall understanding of their operation and function can be achieved. This, in turn, will aid in a quicker response to future attacks. With the average Ransomware attack taking two hours to be identified, it is clear that there is not yet a good understanding of why these attacks are so successful. Research into Ransomware is limited by what is currently known on the topic. Due to the limitations of the research, the decision was taken to examine only three samples of Ransomware from different families. This was decided because of the complexity and comprehensive nature of the research. The in-depth nature of the research and the time constraints associated with it did not allow the proof of concept of this framework to be tested on more than three families, but the exploratory work was promising and should be explored further in future research. The aim of the research is to follow the Hybrid-Malware analysis framework, which consists of both static and dynamic analysis phases, in addition to the digital forensic examination of the infected system. This allows for signature-based findings along with behavioural and forensic findings, all in one. This information allows for a better understanding of how this malware is designed and how it infects and remains persistent on a system. The operating system chosen is Microsoft Windows 7, which is still utilised by a significant proportion of Windows users, especially in the corporate environment. The experiment process was designed to enable the researcher to collect information regarding the Ransomware and every aspect of its behaviour and communication on a target system. The results can be compared across the three strains to identify the commonalities. The initial hypothesis was that Ransomware variants are all much like an instant cake mix: they consist of specific building blocks which remain the same, with the flavouring being the unique feature. , Thesis (MSc) -- Faculty of Science, Computer Science, 2019
- Full Text:
- Date Issued: 2019
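As a small, hedged illustration of the static-analysis phase mentioned above, the following sketch hashes a sample and pulls printable strings for later signature comparison. It is a generic static-analysis step applied to any file path, not the thesis' full hybrid framework, and the sample path is hypothetical.

```python
# Hedged sketch: a basic static-analysis step for a ransomware sample -
# compute hashes and extract printable strings for later signature comparison.
import hashlib
import re
from pathlib import Path

def static_summary(sample_path: str, min_len: int = 6) -> dict:
    """Return hashes, size, and the first few printable-string runs of a binary."""
    data = Path(sample_path).read_bytes()
    strings = re.findall(rb"[ -~]{%d,}" % min_len, data)  # runs of printable ASCII
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
        "size_bytes": len(data),
        # Strings often reveal ransom-note text, URLs, or crypto API names.
        "interesting_strings": [s.decode() for s in strings[:20]],
    }

if __name__ == "__main__":
    summary = static_summary("samples/sample.bin")  # hypothetical sample path
    print(summary["sha256"], summary["size_bytes"])
```

The dynamic and forensic phases described in the abstract would then observe the same sample at run time (process, file, registry, and network activity) in an isolated Windows 7 environment, and the findings would be compared across the three families.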
An analysis of the use of DNS for malicious payload distribution
- Authors: Dube, Ishmael
- Date: 2019
- Subjects: Internet domain names , Computer networks -- Security measures , Computer security , Computer network protocols , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/97531 , vital:31447
- Description: The Domain Name System (DNS) protocol is a fundamental part of Internet activity that can be abused by cybercriminals to conduct malicious activities. Previous research has shown that cybercriminals use different methods, including the DNS protocol, to distribute malicious content, remain hidden, and avoid detection by the various technologies that are put in place to detect anomalies. This allows botnets and certain malware families to establish covert communication channels that can be used to send or receive data and also to distribute malicious payloads via DNS queries and responses. By embedding certain strings in DNS packets, cybercriminals use the DNS to breach highly protected networks, distribute malicious content, and exfiltrate sensitive information without being detected by the security controls put in place. This research undertaking broadens the field and fills an existing research gap by extending the analysis of DNS as a payload distribution channel to the detection of domains that are used to distribute different malicious payloads. It analysed the use of the DNS in detecting domains and channels that are used for distributing malicious payloads. Passive DNS data, which replicates the DNS queries observed on name servers, was evaluated and analysed in order to detect anomalies in DNS queries and identify malicious payloads. The research characterises the malicious payload distribution channels by analysing passive DNS traffic and modelling the DNS query and response patterns. The research found that it is possible to detect malicious payload distribution channels through the analysis of DNS TXT resource records.
- Full Text:
- Date Issued: 2019
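The finding above centres on spotting DNS TXT resource records that carry encoded payloads. A minimal, hedged heuristic for flagging suspicious TXT strings, combining length and Shannon entropy, is sketched below; the thresholds and the sample records are assumptions, and the thesis' passive-DNS pipeline is not reproduced.

```python
# Hedged sketch: flagging DNS TXT records that look like encoded payload carriers,
# using simple length and Shannon-entropy heuristics. Thresholds are illustrative.
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits of entropy per character of the string."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_like_payload(txt_value: str, min_len: int = 120, min_entropy: float = 4.0) -> bool:
    """Unusually long, high-entropy TXT strings are candidates for payload distribution."""
    return len(txt_value) >= min_len and shannon_entropy(txt_value) >= min_entropy

# Example passive-DNS TXT values; the second simulates a long encoded blob.
encoded_blob = "".join(chr(65 + (i * 7) % 26) + str(i % 10) for i in range(80))
observed_txt = {
    "legit.example.com": "v=spf1 include:_spf.example.com ~all",
    "odd.example.net": encoded_blob,
}
for name, value in observed_txt.items():
    print(name, "-> suspicious" if looks_like_payload(value) else "-> benign")
```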
Bolvedere: a scalable network flow threat analysis system
- Authors: Herbert, Alan
- Date: 2019
- Subjects: Bolvedere (Computer network analysis system) , Computer networks -- Scalability , Computer networks -- Measurement , Computer networks -- Security measures , Telecommunication -- Traffic -- Measurement
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/71557 , vital:29873
- Description: Since the advent of the Internet, and its public availability in the late 1990s, there have been significant advancements in network technologies and thus a significant increase in the bandwidth available to network users, both human and automated. Although this growth is of great value to network users, it has led to an increase in malicious network-based activities, and it is theorised that, as more services become available on the Internet, the volume of such activities will continue to grow. Because of this, there is a need to monitor, comprehend, discern, understand and (where needed) respond to events on networks worldwide. Although this line of thought is simple in its reasoning, undertaking such a task is no small feat. Full packet analysis is a method of network surveillance that seeks out specific characteristics within network traffic that may tell of malicious activity or anomalies in regular network usage. It is carried out within firewalls and implemented through packet classification. In the context of the networks that make up the Internet, this form of packet analysis has become infeasible, as the volume of traffic introduced onto these networks every day is so large that there are simply not enough processing resources to perform such a task on every packet in real time. One could combat this problem by performing post-incident forensics: archiving packets and processing them later. However, as one cannot process all incoming packets, the archive will eventually run out of space. Full packet analysis is also hindered by the fact that some existing, commonly-used solutions are designed around a single host and a single thread of execution, an outdated approach that is far slower than necessary on current computing technology. This research explores the conceptual design and implementation of a scalable network traffic analysis system named Bolvedere. Analysis performed by Bolvedere simply asks whether the existence of a connection, coupled with its associated metadata, is enough to conclude something meaningful about that connection. This idea draws away from the traditional processing of every single byte in every single packet monitored on a network link (Deep Packet Inspection) through the concept of working with connection flows. Bolvedere performs its work by leveraging the NetFlow version 9 and IPFIX protocols, but is not limited to these. It is implemented using a modular approach that allows for either complete execution of the system on a single host or the horizontal scaling out of subsystems onto multiple hosts. The use of multiple hosts is achieved through the implementation of Zero Message Queue (ZMQ). The ease of interprocess communication provided by ZMQ allows Bolvedere to scale out horizontally, which results in an increase in processing resources and thus an increase in analysis throughput. Many underlying mechanisms in Bolvedere have been automated. This is intended to make the system more user-friendly, as the user need only tell Bolvedere what information they wish to analyse, and the system will then rebuild itself in order to achieve this required task. Bolvedere has also been hardware-accelerated through the use of Field-Programmable Gate Array (FPGA) technologies, which more than doubled the total throughput of the system.
- Full Text:
- Date Issued: 2019
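Bolvedere's horizontal scaling is described as being achieved through ZMQ between modular subsystems. The sketch below shows the general PUSH/PULL pattern with pyzmq that such a flow-record pipeline could use; the port, the message shape, and the toy worker logic are assumptions rather than Bolvedere's actual wiring.

```python
# Hedged sketch: a ZeroMQ PUSH/PULL pipeline for fanning flow records out to workers,
# in the spirit of Bolvedere's modular design. Port and record fields are assumptions.
import zmq

def producer(records: list, endpoint: str = "tcp://127.0.0.1:5557") -> None:
    """Bind a PUSH socket and distribute flow records to however many workers connect."""
    ctx = zmq.Context.instance()
    push = ctx.socket(zmq.PUSH)
    push.bind(endpoint)
    for record in records:
        push.send_json(record)  # each connected worker receives a share of the stream

def worker(endpoint: str = "tcp://127.0.0.1:5557") -> None:
    """Connect a PULL socket and analyse whatever records arrive."""
    ctx = zmq.Context.instance()
    pull = ctx.socket(zmq.PULL)
    pull.connect(endpoint)
    while True:
        flow = pull.recv_json()
        if flow.get("dst_port") == 23:  # toy analysis: flag telnet-bound flows
            print("telnet flow from", flow.get("src_ip"))

# Example flow record shaped loosely like exported NetFlow/IPFIX fields.
example = {"src_ip": "192.0.2.10", "dst_ip": "198.51.100.5", "dst_port": 23, "bytes": 74}
# Run producer([example] * 1000) in one process and worker() in one or more others;
# adding workers increases throughput without changing the producer.
```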
Categorising Network Telescope data using big data enrichment techniques
- Authors: Davis, Michael Reginald
- Date: 2019
- Subjects: Denial of service attacks , Big data , Computer networks -- Security measures
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92941 , vital:30766
- Description: Network Telescopes, Internet backbone sampling, IDS and other forms of network-sourced Threat Intelligence provide researchers with insight into the methods and intent of remote entities by capturing network traffic and analysing the resulting data. This analysis and determination of intent is made difficult by the large amounts of potentially malicious traffic, coupled with the limited amount of knowledge that can be attributed to the source of the incoming data, as the source is known only by its IP address. Due to the lack of commonly available tooling, many researchers start this analysis from the beginning and so repeat and re-iterate previous research as the bulk of their work. As a result, new insight into methods and approaches of analysis is gained at a high cost. Our research approaches this problem by using additional knowledge about the source IP address, such as open ports, reverse and forward DNS, BGP routing tables and more, to enhance the researcher's ability to understand the traffic source. The research is a big data experiment, where large (hundreds of GB) datasets are merged with a two-month section of Network Telescope data using a set of Python scripts. The results are written to a Google BigQuery database table. Analysis of the network data is greatly simplified, with questions about the nature of the source, such as its device class (home routing device or server), potential vulnerabilities (open telnet ports or databases) and location, becoming relatively easy to answer. Using this approach, researchers can focus on the questions that need answering and address them efficiently. This research could be taken further by using additional data sources such as geo-location, WHOIS lookups, Threat Intelligence feeds and many others. Other potential areas of research include real-time categorisation of incoming packets, in order to better inform the configuration of alerting and reporting systems. In conclusion, categorising Network Telescope data in this way provides insight into the intent of the (apparent) originator and as such is a valuable tool for those seeking to understand the purpose and intent of arriving packets. In particular, the ability to remove packets categorised as non-malicious (e.g. those in the Research category) from the data eliminates a known source of 'noise'. This allows the researcher to focus their efforts in a more productive manner.
- Full Text:
- Date Issued: 2019
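The abstract describes merging enrichment datasets with Network Telescope data using Python scripts and writing the result to a Google BigQuery table. A minimal, hedged sketch of the enrichment join itself (the step before loading into BigQuery) is given below; the file names, field names, and keying on source IP are assumptions for illustration.

```python
# Hedged sketch: enriching network telescope records with per-IP context
# (open ports, reverse DNS, ASN, ...) keyed on source IP, prior to loading into BigQuery.
# File names and field names are illustrative assumptions.
import csv
import json

def load_enrichment(path: str) -> dict:
    """Index an enrichment CSV (ip, open_ports, rdns, asn, ...) by IP address."""
    with open(path, newline="") as handle:
        return {row["ip"]: row for row in csv.DictReader(handle)}

def enrich_telescope(packets_path: str, enrichment: dict, out_path: str) -> None:
    """Attach enrichment fields to each telescope record; write newline-delimited JSON."""
    with open(packets_path, newline="") as packets, open(out_path, "w") as out:
        for pkt in csv.DictReader(packets):
            pkt.update(enrichment.get(pkt["src_ip"], {}))  # leave unknown sources as-is
            out.write(json.dumps(pkt) + "\n")              # NDJSON loads cleanly into BigQuery

if __name__ == "__main__":
    context = load_enrichment("ip_enrichment.csv")         # hypothetical input
    enrich_telescope("telescope_sample.csv", context, "enriched.ndjson")
```

Once the enriched NDJSON is loaded into a BigQuery table, questions such as "which sources had an open telnet port" reduce to simple SQL filters over the added columns.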
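The record above describes enriching network telescope traffic with per-source-IP context before loading it into Google BigQuery. The sketch below is a minimal, hypothetical illustration of that enrichment-join idea; the field names, the tiny in-memory tables and the newline-delimited JSON hand-off to BigQuery are assumptions rather than the thesis's actual scripts.

```python
"""Minimal sketch: enrich network telescope records with per-source-IP context.

Hypothetical field names and layouts; the thesis merges far larger datasets
(open ports, forward/reverse DNS, BGP tables) before loading into BigQuery.
"""
import json

# Hypothetical enrichment table keyed on source IP address.
ip_context = {
    "198.51.100.7": {"open_ports": [23, 2323], "rdns": "cpe-198-51-100-7.example.net",
                     "bgp_asn": 64500, "device_class": "home-router"},
}

# Hypothetical telescope records (source IP, destination port, timestamp).
telescope_packets = [
    {"src": "198.51.100.7", "dst_port": 23, "ts": "2019-01-05T10:12:31Z"},
    {"src": "203.0.113.9", "dst_port": 445, "ts": "2019-01-05T10:12:35Z"},
]

def enrich(packet):
    """Attach whatever context is held for the packet's source IP."""
    context = ip_context.get(packet["src"], {})
    return {**packet, **{f"src_{k}": v for k, v in context.items()}}

# Write newline-delimited JSON, a format accepted by BigQuery load jobs.
with open("enriched.ndjson", "w") as fh:
    for pkt in telescope_packets:
        fh.write(json.dumps(enrich(pkt)) + "\n")
# The file could then be loaded with a BigQuery load job
# (e.g. `bq load --source_format=NEWLINE_DELIMITED_JSON ...`).
```

In practice the enrichment tables would be orders of magnitude larger and keyed lookups would be performed during a batch merge rather than from an in-memory dictionary.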
Targeted attack detection by means of free and open source solutions
- Authors: Bernardo, Louis F
- Date: 2019
- Subjects: Computer networks -- Security measures , Information technology -- Security measures , Computer security -- Management , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92269 , vital:30703
- Description: Compliance requirements are part of everyday business requirements in various areas, such as retail and medical services. As part of compliance, it may be required to have infrastructure in place to monitor activities in the environment and ensure that the relevant data and environment are sufficiently protected. At the core of such monitoring solutions one would find some type of data repository, or database, to store and ultimately correlate the captured events. Such solutions are commonly called Security Information and Event Management, or SIEM for short. Larger companies have been known to use commercial solutions such as IBM's QRadar, LogRhythm, or Splunk. However, these come at significant cost and are not suitable for smaller businesses with limited budgets. These solutions require manual configuration of event correlation to detect activities that place the environment in danger, which usually requires vendor implementation assistance that also comes at a cost. Alternatively, there are open source solutions that provide the required functionality. This research demonstrates building an open source solution, with minimal to no cost for hardware or software, while still maintaining the capability of detecting targeted attacks. The solution presented in this research includes Wazuh, which combines OSSEC and the ELK stack, integrated with a Network Intrusion Detection System (NIDS). The success of the integration is determined by measuring positive attack detection for each of the different configuration options. To perform the testing, a deliberately vulnerable platform named Metasploitable was used as the victim host; its vulnerabilities were created specifically to serve as targets for Metasploit. The attacks were generated using the Metasploit Framework on a prebuilt Kali Linux host. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2019
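As a rough illustration of how positive detections could be tallied in a Wazuh/ELK deployment such as the one described above, the sketch below queries Elasticsearch for recent Wazuh alerts and aggregates them by rule. The endpoint, index pattern and field names are assumptions that depend on the installed Wazuh and Elastic versions; this is not the thesis's measurement harness.

```python
"""Minimal sketch: tally Wazuh alerts recorded while attacks are replayed from
Kali against the Metasploitable victim. Endpoint, index pattern and field
names are assumptions that must match the deployed Wazuh/Elastic versions.
"""
import requests

ES = "http://localhost:9200"      # assumed Elasticsearch endpoint
INDEX = "wazuh-alerts-*"          # assumed Wazuh alert index pattern

query = {
    "size": 0,
    # Alerts raised during the last test run; field name assumed.
    "query": {"range": {"timestamp": {"gte": "now-1h"}}},
    "aggs": {"by_rule": {"terms": {"field": "rule.id", "size": 20}}},
}

resp = requests.get(f"{ES}/{INDEX}/_search", json=query, timeout=30)
resp.raise_for_status()
body = resp.json()

# hits.total is an integer or an object, depending on the Elasticsearch version.
print("total alerts in window:", body["hits"]["total"])
for bucket in body["aggregations"]["by_rule"]["buckets"]:
    print(f"rule {bucket['key']}: {bucket['doc_count']} alert(s)")
```

Comparing these counts across configuration options (for example, with and without the NIDS feed enabled) gives a simple measure of how each option affects detection.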
Towards understanding and mitigating attacks leveraging zero-day exploits
- Authors: Smit, Liam
- Date: 2019
- Subjects: Computer crimes -- Prevention , Data protection , Hacking , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/115718 , vital:34218
- Description: Zero-day vulnerabilities are unknown and therefore not addressed, with the result that they can be exploited by attackers to gain unauthorised system access. In order to understand and mitigate attacks leveraging zero-days or unknown techniques, it is necessary to study the vulnerabilities, exploits and attacks that make use of them. In recent years there have been a number of leaks publishing such attacks using various methods to exploit vulnerabilities. This research seeks to understand what types of vulnerabilities exist, why and how these are exploited, and how to defend against such attacks by mitigating either the vulnerabilities or the method or process of exploiting them. By moving beyond merely remedying the vulnerabilities to defences that are able to prevent or detect the actions taken by attackers, the security of the information system will be better positioned to deal with future unknown threats. An interesting finding is how attackers move beyond the observable bounds of a system to circumvent security defences, for example, by compromising syslog servers or going down to lower system rings to gain access. However, defenders can counter this by employing defences that are external to the system, preventing attackers from disabling them or removing collected evidence after gaining system access. Attackers are able to defeat air-gaps via the leakage of electromagnetic radiation, and to misdirect attribution by planting false artefacts for forensic analysis and by attacking from third-party information systems. They analyse the methods of other attackers to learn new techniques; an example of this is the Umbrage project, whereby malware is analysed to decide whether it should be implemented as a proof of concept. Another important finding is that attackers respect defence mechanisms such as remote syslog (e.g. firewall), core dump files, database auditing, and Tripwire (e.g. SlyHeretic). These defences all have the potential to result in the attacker being discovered, so attackers must either negate the defence mechanism or find unprotected targets. Defenders can use technologies such as encryption to defend against interception and man-in-the-middle attacks. They can also employ honeytokens and honeypots to alarm, misdirect, slow down and learn from attackers. By employing various tactics, defenders are able to increase their chances of detecting attacks and the time available to react to them, even for attacks exploiting hitherto unknown vulnerabilities. To summarise the information presented in this thesis and to show its practical importance, an examination is presented of the NSA's network intrusion of the SWIFT organisation. It shows that the firewalls were exploited with remote code execution zero-days, an attack with a striking parallel in the approach used by the recent VPNFilter malware. If nothing else, the leaks provide information to other actors on how to attack and what to avoid. However, by studying state actors, we can gain insight into what other actors with fewer resources can do in the future. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2019
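One of the defensive themes above is keeping evidence outside the reach of an intruder, for example via remote syslog and honeytokens. The sketch below is a minimal, hypothetical honeytoken watcher that reports access to a bait file to an external syslog collector; the file path, collector address and polling approach are illustrative assumptions, not the specific defences examined in the thesis.

```python
"""Minimal sketch: a honeytoken watcher that reports access to a canary file
to a *remote* syslog collector, so an intruder who gains local access cannot
simply erase the evidence. Paths, addresses and the interval are assumptions.
"""
import logging
import logging.handlers
import os
import time

CANARY = "/srv/finance/passwords.xlsx"    # bait file no legitimate process should touch
COLLECTOR = ("syslog.example.org", 514)   # external syslog server (UDP)

logger = logging.getLogger("honeytoken")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=COLLECTOR))

# Note: access-time tracking depends on mount options (e.g. relatime); a
# production honeytoken would usually hook file access events instead.
last_atime = os.stat(CANARY).st_atime
while True:
    atime = os.stat(CANARY).st_atime
    if atime != last_atime:               # someone read the bait file
        logger.info("honeytoken %s accessed at %s", CANARY, time.ctime(atime))
        last_atime = atime
    time.sleep(5)
```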
A comparison of exact string search algorithms for deep packet inspection
- Authors: Hunt, Kieran
- Date: 2018
- Subjects: Algorithms , Firewalls (Computer security) , Computer networks -- Security measures , Intrusion detection systems (Computer security) , Deep Packet Inspection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/60629 , vital:27807
- Description: Every day, computer networks throughout the world face a constant onslaught of attacks. To combat these, network administrators are forced to employ a multitude of mitigating measures. Devices such as firewalls and Intrusion Detection Systems are prevalent today and employ extensive Deep Packet Inspection to scrutinise each piece of network traffic. Systems such as these usually require specialised hardware to meet the demand imposed by high-throughput networks. Hardware like this is extremely expensive and singular in its function. It is with this in mind that the string search algorithms are introduced. These algorithms have been proven to perform well when searching through large volumes of text and may be able to perform equally well in the context of Deep Packet Inspection. String search algorithms are designed to match a single pattern to a substring of a given piece of text, which is not unlike the heuristics employed by traditional Deep Packet Inspection systems. This research compares the performance of a large number of string search algorithms during packet processing. Deep Packet Inspection places stringent restrictions on the reliability and speed of the algorithms due to increased performance pressures. A test system was designed in order to test the string search algorithms properly in the context of Deep Packet Inspection; it allowed for precise and repeatable tests of each algorithm and for their subsequent comparison. Of the algorithms tested, the Horspool and Quick Search algorithms posted the best results for both speed and reliability, while the Not So Naive and Rabin-Karp algorithms were slowest overall. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2018
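For readers unfamiliar with the algorithms compared above, the following is a minimal Python sketch of Boyer-Moore-Horspool exact matching applied to a packet payload held as bytes. It illustrates the algorithm only; the thesis's benchmarking system, test data and measurements are not reproduced here, and the sample payload and signature are invented.

```python
"""Minimal sketch of Boyer-Moore-Horspool exact string search, one of the
algorithms compared in the thesis, applied to a packet payload as bytes."""

def horspool(pattern: bytes, text: bytes) -> int:
    """Return the index of the first occurrence of pattern in text, or -1."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1 if m else 0
    # Shift table: for each byte of the pattern (except the last), the distance
    # from its last occurrence to the end of the pattern; default is the full length.
    shift = {b: m - i - 1 for i, b in enumerate(pattern[:-1])}
    pos = 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            return pos
        # Skip ahead based on the text byte aligned with the pattern's last position.
        pos += shift.get(text[pos + m - 1], m)
    return -1

payload = b"\x00GET /index.html HTTP/1.1\r\nHost: example.com\r\n"
print(horspool(b"Host:", payload))   # prints the offset of the signature, or -1
```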
A framework for malicious host fingerprinting using distributed network sensors
- Authors: Hunter, Samuel Oswald
- Date: 2018
- Subjects: Computer networks -- Security measures , Malware (Computer software) , Multisensor data fusion , Distributed Sensor Networks , Automated Reconnaissance Framework , Latency Based Multilateration
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/60653 , vital:27811
- Description: Numerous software agents exist and are responsible for the increasing volumes of malicious traffic observed on the Internet today. From a technical perspective, the existing techniques for monitoring malicious agents and traffic were not developed to allow for the interrogation of the source of malicious traffic. This interrogation or reconnaissance would be considered active analysis, as opposed to existing, mostly passive analysis. Unlike passive analysis, the active techniques are time-sensitive and their results become increasingly inaccurate as the time delta between observation and interrogation increases. In addition, some studies have shown that the geographic separation of hosts on the Internet has resulted in pockets of different malicious agents and traffic targeting victims. As such, it is important to perform data collection from various sources and across distributed IP address space. The data gathering and exposure capabilities of sensors such as honeypots and network telescopes were extended through the development of near-realtime Distributed Sensor Network modules that allowed for the near-realtime analysis of malicious traffic from distributed, heterogeneous monitoring sensors. In order to utilise the data exposed by these modules, an Automated Reconnaissance Framework (AR-Framework) was created. This framework was tasked with active and passive information collection and analysis of data in near-realtime, and was designed from an adapted Multi Sensor Data Fusion model. The hypothesis was made that, if sufficiently distinct characteristics of a host could be identified, they could in combination act as a unique fingerprint for that host, potentially allowing for the re-identification of that host even if its IP address had changed. To this end the concept of Latency Based Multilateration was introduced, acting as an additional metric for remote host fingerprinting. The vast amount of information gathered by the AR-Framework required the development of visualisation tools that could illustrate this data in near-realtime and provide various degrees of interaction to accommodate human interpretation of the data. Ultimately, the data collected through the application of the near-realtime Distributed Sensor Network and the AR-Framework provided a unique perspective of a malicious host demographic, allowing new correlations to be drawn between attributes such as common open ports, operating systems, location and the inferred intent of these malicious hosts. The result expands our current understanding of malicious hosts on the Internet and enables further research in the area. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2018
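The following sketch illustrates the intuition behind Latency Based Multilateration as an extra fingerprinting feature: round-trip times measured by several distributed sensors form a latency vector that can be compared across sightings alongside other host attributes. The sensor names, similarity measure and threshold are assumptions made for illustration, not the AR-Framework's implementation.

```python
"""Minimal sketch: comparing two sightings of a host using open ports plus a
latency vector measured by distributed sensors. Thresholds and the distance
measure are illustrative assumptions."""
import math

def latency_distance(a: dict, b: dict) -> float:
    """Euclidean distance between two latency vectors over their shared sensors."""
    shared = a.keys() & b.keys()
    if not shared:
        return math.inf
    return math.sqrt(sum((a[s] - b[s]) ** 2 for s in shared))

# RTTs in milliseconds measured by three geographically separate sensors.
sighting_1 = {"ip": "203.0.113.50", "ports": {23, 80},
              "rtt": {"za": 180.0, "eu": 35.0, "us": 110.0}}
sighting_2 = {"ip": "198.51.100.9", "ports": {23, 80},
              "rtt": {"za": 178.5, "eu": 36.2, "us": 108.9}}

same_ports = sighting_1["ports"] == sighting_2["ports"]
close_latency = latency_distance(sighting_1["rtt"], sighting_2["rtt"]) < 5.0  # assumed threshold

# If several independent characteristics agree, the two sightings may well be
# the same host, even though its IP address has changed.
print("possible re-identification:", same_ports and close_latency)
```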
A social networking approach to security awareness in end-user cyber-driven financial transactions
- Authors: Maharaj, Rahul
- Date: 2018
- Subjects: Computer networks -- Security measures
- Language: English
- Type: Thesis , Masters , MIT
- Identifier: http://hdl.handle.net/10948/48824 , vital:41144
- Description: Cyberspace, including the internet and associated technologies, has become critical to social users in their day-to-day lives. Social users have grown reliant on cyberspace and its associated cyber services, and a culture of dependence on cyberspace has formed. This cyberculture needs to ensure that cyberspace and associated cyber services can be used in a safe and secure manner. This is particularly true for those social users involved in cyber-driven financial transactions. Therefore, the aim of this research study is to report on research undertaken to assist said users by providing them with an alternative approach to cyber security education, awareness and training.
- Full Text:
An information security governance model for industrial control systems
- Authors: Webster, Zynn
- Date: 2018
- Subjects: Computer networks -- Security measures , Data protection , Computer security , Business enterprises -- Computer networks -- Security measures
- Language: English
- Type: Thesis , Masters , MIT
- Identifier: http://hdl.handle.net/10948/36383 , vital:33934
- Description: Industrial Control Systems (ICS) is a term used to describe several types of control systems, including Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS) and Programmable Logic Controllers (PLC). These systems consist of a combination of control components (e.g. electrical, mechanical, pneumatic) which act together to achieve an industrial objective (e.g. manufacturing, or the transportation of matter or energy). ICS play a fundamental role in critical infrastructures such as electricity grids and the oil, gas and manufacturing industries. Initially, ICS had little resemblance to typical enterprise IT systems; they were isolated and ran proprietary control protocols using specialized hardware and software. However, with initiatives such as Industry 4.0 and the Industrial Internet of Things (IIoT), the nature of ICS has changed significantly. There is an ever-increasing use of commercial operating systems and standard protocols such as TCP/IP and Ethernet. Consequently, modern ICS increasingly resemble conventional enterprise IT systems, and it is well known that such IT systems and networks are vulnerable and require extensive management to ensure Confidentiality, Integrity and Availability. Since ICS are now adopting conventional IT characteristics, they are also accepting the associated risks; however, owing to the functional area of ICS, the consequences of these threats are much more severe than those affecting enterprise IT systems. The need to manage security for these systems with highly skilled IT personnel has become essential. Therefore, this research focused on identifying which unique security controls for ICS and enterprise IT systems can be combined and/or tailored to provide the organization with a single set of comprehensive security controls. By investigating existing standards and best practices for both enterprise IT and ICS environments, this study produced a single set of security controls and presented how these controls can be integrated into an existing information security governance model, which organizations can use as a basis for generating a security framework that secures not only their enterprise IT systems but also their ICS. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2018
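As a toy illustration of combining enterprise IT and ICS control catalogues into a single set, the sketch below merges two small dictionaries of controls, preferring the ICS wording where both catalogues define the same control. The control identifiers, descriptions and the tailoring rule are illustrative assumptions; the thesis derives its combined control set from existing standards and best practices rather than from code.

```python
"""Minimal sketch: merging enterprise IT and ICS security control catalogues
into one set. Identifiers and descriptions are illustrative placeholders."""

it_controls = {
    "AC-2": "Account management",
    "AU-6": "Audit review, analysis and reporting",
    "SC-7": "Boundary protection",
}
ics_controls = {
    "SC-7": "Boundary protection (zones and conduits between IT and OT networks)",
    "CP-10": "Recovery and reconstitution of control functions",
}

# Union of the two catalogues; where both define a control, keep the ICS
# wording, since availability and safety constraints dominate in OT networks.
combined = {**it_controls, **ics_controls}

for control_id, text in sorted(combined.items()):
    print(f"{control_id}: {text}")
```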
Pursuing cost-effective secure network micro-segmentation
- Authors: Fürst, Mark Richard
- Date: 2018
- Subjects: Computer networks -- Security measures , Computer networks -- Access control , Firewalls (Computer security) , IPSec (Computer network protocol) , Network micro-segmentation
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/131106 , vital:36524
- Description: Traditional network segmentation allows discrete trust levels to be defined for different network segments, using physical firewalls or routers that control north-south traffic flowing between different interfaces. This technique reduces the attack surface area should an attacker breach one of the perimeter defences. However, east-west traffic flowing between endpoints within the same network segment does not pass through a firewall, and an attacker may be able to move laterally between endpoints within that segment. Network micro-segmentation was designed to address the challenge of controlling east-west traffic, and various solutions have been released with differing levels of capabilities and feature sets. These approaches range from simple network switch Access Control List based segmentation to complex hypervisor-based software-defined security segments defined down to the individual workload, container or process level, and enforced via policy-based security controls for each segment. Several commercial solutions for network micro-segmentation exist, but these are primarily focused on physical and cloud data centres, and are often accompanied by significant capital outlay and resource requirements. Given these constraints, this research determines whether existing tools provided with operating systems can be re-purposed to implement micro-segmentation and restrict east-west traffic within one or more network segments for a small-to-medium-sized corporate network. To this end, a proof-of-concept lab environment was built with a heterogeneous mix of Windows and Linux virtual servers and workstations deployed in an Active Directory domain. The use of Group Policy Objects to deploy IPsec Server and Domain Isolation for controlling traffic between endpoints is examined, in conjunction with IPsec Authenticated Header and Encapsulating Security Payload modes as an additional layer of security. The outcome of the research shows that revisiting existing tools can enable organisations to implement an additional, cost-effective secure layer of defence in their network. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2018
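The sketch below illustrates the micro-segmentation principle described above in its simplest form: east-west traffic between workloads is denied unless an explicit rule allows it. The workload roles and rules are invented for illustration; the thesis enforces this idea with IPsec Server and Domain Isolation deployed through Group Policy Objects rather than with application code.

```python
"""Minimal conceptual sketch of default-deny east-west policy between
workloads that share a network segment. Roles, ports and rules are invented."""

# Explicit allow rules: (source role, destination role, destination port).
ALLOW = {
    ("web", "app", 8443),
    ("app", "db", 5432),
    ("admin", "web", 22),
}

def permitted(src_role: str, dst_role: str, dst_port: int) -> bool:
    """Default-deny east-west check: only explicitly allowed flows pass."""
    return (src_role, dst_role, dst_port) in ALLOW

# A compromised web server trying to reach the database directly is blocked,
# even though both hosts sit in the same network segment.
print(permitted("web", "db", 5432))   # False
print(permitted("app", "db", 5432))   # True
```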
Towards a threat assessment framework for consumer health wearables
- Authors: Mnjama, Javan Joshua
- Date: 2018
- Subjects: Activity trackers (Wearable technology) , Computer networks -- Security measures , Data protection , Information storage and retrieval systems -- Security systems , Computer security -- Software , Consumer Health Wearable Threat Assessment Framework , Design Science Research
- Language: English
- Type: text , Thesis , Masters , MCom
- Identifier: http://hdl.handle.net/10962/62649 , vital:28225
- Description: The collection of health data such as physical activity, consumption and physiological data through consumer health wearables such as fitness trackers is very beneficial for the promotion of physical wellness. However, consumer health wearables and their associated applications are known to have privacy and security concerns that can potentially make the collected personal health data vulnerable to hackers. These concerns are attributed to security theoretical frameworks not sufficiently addressing the entirety of privacy and security concerns relating to the diverse technological ecosystem of consumer health wearables. The objective of this research was therefore to develop a threat assessment framework that can be used to guide the detection of vulnerabilities affecting consumer health wearables and their associated applications. To meet this objective, the Design Science Research methodology was used to develop the desired artefact (the Consumer Health Wearable Threat Assessment Framework). The framework comprises fourteen vulnerabilities classified according to Authentication, Authorization, Availability, Confidentiality, Non-Repudiation and Integrity. The threat assessment framework was demonstrated on two fitness trackers and their associated applications, and it was found that the framework was able to identify how these vulnerabilities affected the two test cases based on its classification categories. The framework was also evaluated by four security experts who assessed its quality, utility and efficacy, and who supported its use as a relevant and comprehensive framework to guide the detection of vulnerabilities in consumer health wearables and their associated applications. The implication of this research study is that the framework can be used by developers to better identify the vulnerabilities of consumer health wearables and their associated applications, assisting in creating a more secure environment for the storage and use of health data by consumer health wearables. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2018
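To make the framework's classification concrete, the sketch below groups hypothetical findings for a wearable and its companion application under the six categories named in the abstract. The example findings and the simple tally are assumptions for illustration; the framework's fourteen published vulnerabilities are not reproduced here.

```python
"""Minimal sketch: grouping hypothetical assessment findings under the six
classification categories named in the abstract."""
from collections import defaultdict

CATEGORIES = ["Authentication", "Authorization", "Availability",
              "Confidentiality", "Non-Repudiation", "Integrity"]

# Hypothetical findings for one fitness tracker and its companion app.
findings = [
    {"category": "Authentication", "issue": "No PIN required to pair the tracker", "affected": True},
    {"category": "Confidentiality", "issue": "Health data synced over unencrypted HTTP", "affected": True},
    {"category": "Integrity", "issue": "Firmware updates are signature-checked", "affected": False},
]

by_category = defaultdict(list)
for finding in findings:
    if finding["category"] not in CATEGORIES:
        raise ValueError(f"unknown category: {finding['category']}")
    by_category[finding["category"]].append(finding)

# Report how many applicable findings fall under each category.
for cat in CATEGORIES:
    hits = [f for f in by_category[cat] if f["affected"]]
    print(f"{cat}: {len(hits)} applicable finding(s)")
```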
A national strategy towards cultivating a cybersecurity culture in South Africa
- Authors: Gcaza, Noluxolo
- Date: 2017
- Subjects: Computer networks -- Security measures , Cyberspace -- Security measures , Computer security -- South Africa , Subculture -- South Africa
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10948/13735 , vital:27303
- Description: In modern society, cyberspace is interwoven into the daily lives of many. Cyberspace is increasingly redefining how people communicate as well as gain access to and share information. Technology has transformed the way the business world operates by introducing new ways of trading goods and services whilst bolstering traditional business methods. It has also altered the way nations govern. Thus, individuals, organisations and nations are relying on this technology to perform significant functions. Alongside the positive innovations afforded by cyberspace, however, those who use it are exposed to a variety of risks. Cyberspace is beset by criminal activities such as cybercrime, fraud and identity theft, to name but a few. Nonetheless, the negative impact of these cyber threats does not outweigh the advantages of cyberspace. In light of such threats, there is a call for all entities that reap the benefits of online services to institute cybersecurity. As such, cybersecurity is a necessity for individuals, organisations and nations alike. In practice, cybersecurity focuses on preventing and mitigating certain security risks that might compromise the security of relevant assets. For a long time, technology-centred measures have been deemed the most significant solution for mitigating such risks. However, after a legacy of unsuccessful technological efforts, it became clear that such solutions in isolation are insufficient to mitigate all cyber-related risks. This is mainly due to the role that humans play in the security process, that is, the human factor. In isolation, technology-centred measures tend to fail to counter the human factor because of the perception among many users that security measures are an obstacle and consequently a waste of time. This user perception can be attributed to the perceived difficulty of the security measure, as well as apparent mistrust and misinterpretation of the measure. Hence, cybersecurity necessitates the development of a solution that encourages acceptable user behaviour in the reality of cyberspace. The cultivation of a cybersecurity culture is thus regarded as the best approach for addressing the human factors that weaken the cybersecurity chain. While the role of culture in pursuing cybersecurity is well appreciated, research focusing on defining and measuring cybersecurity culture is still in its infancy. Furthermore, studies have shown that there are no widely accepted key concepts that delimit a cybersecurity culture. However, the notion that such a culture is not well delineated has not prevented national governments from pursuing a culture in which all citizens behave in a way that promotes cybersecurity. As a result, many countries now offer national cybersecurity campaigns to foster a culture of cybersecurity at a national level. South Africa is among the nations that have identified cultivating a culture of cybersecurity as a strategic priority. However, there is an apparent lack of a practical plan to cultivate such a cybersecurity culture in South Africa. Thus, this study sought firstly to confirm from the existing body of knowledge that cybersecurity culture is indeed ill-defined and, secondly, to delineate what constitutes a national cybersecurity culture. Finally, and primarily, it sought to devise a national strategy that would assist South Africa in fulfilling its objective of cultivating a culture of cybersecurity on a national level.
- Full Text:
- Date Issued: 2017