The effective combating of intrusion attacks through fuzzy logic and neural networks
- Authors: Goss, Robert Melvin
- Date: 2007
- Subjects: Computer security , Fuzzy logic , Neural networks (Computer science)
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9794 , http://hdl.handle.net/10948/512 , http://hdl.handle.net/10948/d1011917 , Computer security , Fuzzy logic , Neural networks (Computer science)
- Description: The importance of properly securing an organization’s information and computing resources has become paramount in modern business. Since the advent of the Internet, securing this organizational information has become increasingly difficult. Organizations deploy many security mechanisms to protect their data; intrusion detection systems in particular have an increasingly valuable role to play, and as networks grow, administrators need better ways to monitor their systems. Currently, many intrusion detection systems lack the means to accurately monitor and report on wireless segments within the corporate network. This dissertation proposes an extension to the NeGPAIM model, known as NeGPAIM-W, which allows for the accurate detection of attacks originating on wireless network segments. The NeGPAIM-W model is able to detect both wired and wireless based attacks and, with the extensions to the original model mentioned previously, also provides for the correlation of intrusion attacks sourced on both wired and wireless network segments. This provides a holistic detection strategy for an organization. This has been accomplished with the use of fuzzy logic and neural networks in the detection of attacks. The model works on the assumption that each user has, and leaves, a unique footprint on a computer system. Thus, all intrusive behaviour on the system, and the networks which support it, can be traced back to the user account which was used to perform the intrusive behaviour.
- Full Text:
- Date Issued: 2007
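The NeGPAIM approach summarised above relies on fuzzy logic to grade how far observed activity deviates from a user's normal footprint. As a loose illustration only — the feature names, membership shapes, and thresholds below are invented for this sketch and are not taken from the NeGPAIM model — fuzzy membership functions can turn raw session measurements into a graded suspicion score:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a, ramps up to 1 on [b, c], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def intrusion_score(login_hour_deviation, failed_logins, bytes_out_ratio):
    """Fuzzy OR (max) over three illustrative 'suspicious' memberships.

    All three features and their breakpoints are hypothetical examples:
    - login_hour_deviation: hours away from the user's usual login time
    - failed_logins: consecutive authentication failures
    - bytes_out_ratio: outbound traffic relative to the user's baseline
    """
    odd_hours = trapezoid(login_hour_deviation, 2, 5, 24, 25)
    brute_force = trapezoid(failed_logins, 3, 10, 1e9, 1e9 + 1)
    exfil = trapezoid(bytes_out_ratio, 2, 5, 1e9, 1e9 + 1)
    return max(odd_hours, brute_force, exfil)
```

A session matching the user's footprint scores 0.0; strong deviation on any one feature pushes the score towards 1.0, which is the behaviour a downstream neural network or alerting threshold could consume.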
NetwIOC: a framework for the automated generation of network-based IOCs for malware information sharing and defence
- Authors: Rudman, Lauren Lynne
- Date: 2018
- Subjects: Malware (Computer software) , Computer networks Security measures , Computer security , Python (Computer program language)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/60639 , vital:27809
- Description: With the substantial number of new malware variants found each day, it is useful to have an efficient way to retrieve Indicators of Compromise (IOCs) from the malware in a format suitable for sharing and detection. In the past, these indicators were manually created after inspection of binary samples and network traffic. The Cuckoo Sandbox is an existing dynamic malware analysis system which meets the requirements for the proposed framework, and was extended by adding a few custom modules. This research explored a way to automate the generation of detailed network-based IOCs in a popular format which can be used for sharing. This was done through careful filtering and analysis of the PCAP file generated by the sandbox, and placing these values into the correct type of STIX objects using Python. Through several evaluations, an analysis of what type of network traffic can be expected for the creation of IOCs was conducted, including a brief case study that examined the effect of analysis time on the number of IOCs created. Using the automatically generated IOCs to create defence and detection mechanisms for the network was evaluated and proved successful. A proof of concept sharing platform developed for the STIX IOCs is showcased at the end of the research.
- Full Text:
- Date Issued: 2018
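The pipeline described above filters values out of the sandbox-generated PCAP file and places them into STIX objects. A minimal sketch of that final step, using plain dictionaries shaped like STIX 2.1 Indicator objects rather than the thesis's actual Cuckoo modules (a production implementation would more likely use the `stix2` Python library; the example IOC values are invented):

```python
import json
import uuid
from datetime import datetime, timezone

def make_stix_indicator(ioc_type, value):
    """Build a minimal STIX 2.1-style Indicator as a plain dict."""
    patterns = {
        "ipv4-addr": f"[ipv4-addr:value = '{value}']",
        "domain-name": f"[domain-name:value = '{value}']",
        "url": f"[url:value = '{value}']",
    }
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "pattern_type": "stix",
        "pattern": patterns[ioc_type],
        "valid_from": now,
    }

# Hypothetical values filtered out of a sandbox run's network traffic:
iocs = [("ipv4-addr", "203.0.113.7"), ("domain-name", "bad.example.net")]
bundle = {
    "type": "bundle",
    "id": f"bundle--{uuid.uuid4()}",
    "objects": [make_stix_indicator(t, v) for t, v in iocs],
}
print(json.dumps(bundle, indent=2))
```

Serialising the bundle to JSON is what makes the output directly shareable on a STIX-based exchange platform of the kind the abstract mentions.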
Towards a collection of cost-effective technologies in support of the NIST cybersecurity framework
- Authors: Shackleton, Bruce Michael Stuart
- Date: 2018
- Subjects: National Institute of Standards and Technology (U.S.) , Computer security , Computer networks Security measures , Small business Information technology Cost effectiveness , Open source software
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/62494 , vital:28199
- Description: The NIST Cybersecurity Framework (CSF) is a specific risk and cybersecurity framework. It provides guidance on controls that can be implemented to help improve an organisation’s cybersecurity risk posture. The CSF Functions consist of Identify, Protect, Detect, Respond, and Recover. Like most Information Technology (IT) frameworks, there are elements of people, processes, and technology. The same elements are required to successfully implement the NIST CSF. This research specifically focuses on the technology element. While there are many commercial technologies available to a small to medium-sized business, the costs can be prohibitively expensive. Therefore, this research investigates cost-effective technologies and assesses their alignment to the NIST CSF. The assessment was made against the NIST CSF subcategories. Each subcategory was analysed to identify where a technology would likely be required. The framework provides a list of Informative References. These Informative References were used to create high-level technology categories, as well as to identify the technical controls against which the technologies were measured. The technologies tested were either open source or proprietary. All open source technologies tested were free to use, or have a free community edition. Proprietary technologies would be free to use, or considered generally available to most organisations, such as components contained within Microsoft platforms. The results from the experimentation demonstrated that there are multiple cost-effective technologies that can support the NIST CSF. Once all technologies were tested, the NIST CSF was extended. Two new columns were added, namely high-level technology category and tested technology. The columns were populated with output from the research. This extended framework begins an initial collection of cost-effective technologies in support of the NIST CSF.
- Full Text:
- Date Issued: 2018
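The extension described above adds two columns to the CSF, a high-level technology category and a tested technology per subcategory. A toy sketch of what such a mapping could look like in code; the subcategory identifiers are real CSF v1.1 IDs, but the technology assignments are hypothetical examples, not the dissertation's actual findings:

```python
# Illustrative extension of CSF subcategories with the two added columns.
# The technology mappings below are invented examples for this sketch.
csf_extension = {
    "PR.DS-1": {  # "Data-at-rest is protected"
        "function": "Protect",
        "technology_category": "Disk encryption",
        "tested_technology": "VeraCrypt (open source)",
    },
    "DE.CM-1": {  # "The network is monitored to detect potential cybersecurity events"
        "function": "Detect",
        "technology_category": "Network monitoring / IDS",
        "tested_technology": "Snort (open source)",
    },
}

def technologies_for(function_name):
    """List the tested technologies covering a given CSF Function."""
    return [
        row["tested_technology"]
        for row in csf_extension.values()
        if row["function"] == function_name
    ]
```

Keying the table by subcategory ID mirrors how the dissertation's two extra columns sit alongside the framework's existing rows, so coverage gaps per Function are easy to query.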
An analysis of the use of DNS for malicious payload distribution
- Authors: Dube, Ishmael
- Date: 2019
- Subjects: Internet domain names , Computer networks -- Security measures , Computer security , Computer network protocols , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/97531 , vital:31447
- Description: The Domain Name System (DNS) protocol is a fundamental part of Internet activities that can be abused by cybercriminals to conduct malicious activities. Previous research has shown that cybercriminals use different methods, including the DNS protocol, to distribute malicious content, remain hidden and avoid detection from various technologies that are put in place to detect anomalies. This allows botnets and certain malware families to establish covert communication channels that can be used to send or receive data and also distribute malicious payloads using DNS queries and responses. Cybercriminals use the DNS to breach highly protected networks, distribute malicious content, and exfiltrate sensitive information without being detected by security controls, by embedding certain strings in DNS packets. This research undertaking broadens the field and fills an existing research gap by extending the analysis of the DNS being used as a payload distribution channel to the detection of domains that are used to distribute different malicious payloads. It analysed the use of the DNS in detecting domains and channels that are used for distributing malicious payloads. Passive DNS data, which replicates DNS queries on name servers, was evaluated and analysed in order to detect anomalies in DNS queries and identify malicious payloads. The research characterises the malicious payload distribution channels by analysing passive DNS traffic and modelling the DNS query and response patterns. The research found that it is possible to detect malicious payload distribution channels through the analysis of DNS TXT resource records.
- Full Text:
- Date Issued: 2019
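One common way to spot TXT-record abuse of the kind analysed above is to look for long, high-entropy payloads that resemble encoded data rather than human-readable text. A small illustrative heuristic — the thresholds are invented for this sketch and are not the thesis's actual method:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character in a string."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def flag_txt_record(txt_payload, entropy_threshold=4.5, length_threshold=100):
    """Heuristically flag a DNS TXT payload that looks like encoded data.

    Legitimate TXT records (SPF, DKIM, verification tokens) tend to be short
    or structured; base64/hex-packed payloads are long and high-entropy.
    Both thresholds here are illustrative.
    """
    return (len(txt_payload) > length_threshold
            and shannon_entropy(txt_payload) > entropy_threshold)

# A typical benign SPF record is short and is not flagged:
benign = "v=spf1 include:_spf.example.com ~all"
assert flag_txt_record(benign) is False
```

In a passive-DNS setting, a filter like this would be applied over the recorded TXT responses per domain, with flagged domains then examined for the query/response patterns the thesis models.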
Establishing an information security culture in organizations : an outcomes based education approach
- Authors: Van Niekerk, Johannes Frederick
- Date: 2005
- Subjects: Computer security , Management information systems -- Security measures , Competency-based education
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9742 , http://hdl.handle.net/10948/164 , Computer security , Management information systems -- Security measures , Competency-based education
- Description: Information security is crucial to the continuous well-being of modern organizations. Humans play a significant role in the processes needed to secure an organization's information resources. Without an adequate level of user co-operation and knowledge, many security techniques are liable to be misused or misinterpreted by users. This may result in an adequate security measure becoming inadequate. It is therefore necessary to educate the organization's employees regarding information security and also to establish a corporate sub-culture of information security in the organization, which will ensure that the employees have the correct attitude towards their security responsibilities. Current information security education programs fail to pay sufficient attention to the behavioral sciences. There also exists a lack of knowledge regarding the principles, and processes, that would be needed for the establishment of a corporate sub-culture specific to information security. Without both the necessary knowledge, and the desired attitude amongst employees, it will be impossible to guarantee that the organization's information resources are secure. It would therefore make sense to address both these dimensions of the human factor in information security using a single integrated, holistic approach. This dissertation presents such an approach, which is based on an integration of sound behavioral theories.
- Full Text:
- Date Issued: 2005
Introducing Hippocratic log files for personal privacy control
- Authors: Rutherford, Andrew
- Date: 2005
- Subjects: Computer security , Internet -- Security measures
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9743 , http://hdl.handle.net/10948/171 , Computer security , Internet -- Security measures
- Description: The rapid growth of the Internet has served to intensify existing privacy concerns of the individual, to the point that privacy is the number one concern amongst Internet users today. Tools exist that can provide users with a choice of anonymity or pseudonymity. However, many Web transactions require the release of personally identifying information, thus rendering such tools infeasible in many instances. Since it is then a given that users are often required to release personal information, which could be recorded, it follows that they require a greater degree of control over the information they release. Hippocratic databases, designed by Agrawal, Kiernan, Srikant, and Xu (2002), aim to give users greater control over information stored in a database. Their design was inspired by the medical Hippocratic oath, and makes data privacy protection a fundamental responsibility of the database itself. To achieve the privacy of data, Hippocratic databases are governed by 10 key privacy principles. This dissertation argues that, aside from a few challenges, the 10 principles of Hippocratic databases can be applied to log files. This argument is supported by presenting a high-level functional view of a Hippocratic log file architecture. This architecture focuses on issues that highlight the control users gain over their personal information that is collected in log files. By presenting a layered view of the aforementioned architecture, it was, furthermore, possible to provide greater insight into the major processes that would be at work in a Hippocratic log file implementation. An exploratory prototype served to understand and demonstrate certain of the architectural components of Hippocratic log files. This dissertation thus makes a contribution to the ideal of providing users with greater control over their personal information, by proposing the use of Hippocratic log files.
- Full Text:
- Date Issued: 2005
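Two of the Hippocratic principles the architecture above carries over to log files, purpose specification and limited retention, can be sketched as a toy log structure that tags every entry with a purpose and an expiry. This is illustrative only and is not the dissertation's prototype:

```python
from datetime import datetime, timedelta, timezone

class HippocraticLog:
    """Toy log that attaches a purpose and a retention period to each entry,
    echoing two of the ten Hippocratic principles (purpose specification
    and limited retention). Field names are invented for this sketch."""

    def __init__(self):
        self.entries = []

    def record(self, user, event, purpose, retain_days):
        """Collect an entry only together with its stated purpose and expiry."""
        self.entries.append({
            "user": user,
            "event": event,
            "purpose": purpose,
            "expires": datetime.now(timezone.utc) + timedelta(days=retain_days),
        })

    def query(self, purpose):
        """Limited disclosure: only return entries collected for this purpose."""
        return [e for e in self.entries if e["purpose"] == purpose]

    def purge_expired(self, now=None):
        """Limited retention: drop entries whose retention period has lapsed."""
        now = now or datetime.now(timezone.utc)
        self.entries = [e for e in self.entries if e["expires"] > now]
```

The point of the sketch is that the privacy metadata travels with the log entry itself, so enforcement becomes a responsibility of the log file rather than of every consumer of it.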
Towards a capability maturity model for a cyber range
- Authors: Aschmann, Michael Joseph
- Date: 2020
- Subjects: Computer software -- Development , Computer security
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/163142 , vital:41013
- Description: This work describes research undertaken towards the development of a Capability Maturity Model (CMM) for Cyber Ranges (CRs) focused on cyber security. Global cyber security needs are on the rise, and the need for attribution within the cyber domain is of particular concern. This has prompted major efforts to enhance cyber capabilities within organisations to increase their total cyber resilience posture. These efforts include, but are not limited to, the testing of computational devices, networks, and applications, and cyber skills training focused on prevention, detection and cyber attack response. A cyber range allows for the testing of the computational environment. By developing cyber events within a confined virtual or sand-boxed cyber environment, a cyber range can prepare the next generation of cyber security specialists to handle a variety of potential cyber attacks. Cyber ranges have different purposes, each designed to fulfil a different computational testing and cyber training goal; consequently, cyber ranges can vary greatly in the level of variety, capability, maturity and complexity. As cyber ranges proliferate and become more and more valued as tools for cyber security, a method to classify or rate them becomes essential. Yet while universal criteria for measuring cyber ranges in terms of their capability maturity levels become more critical, there are currently very limited resources for researchers aiming to perform this kind of work. For this reason, this work proposes and describes a CMM, designed to give organisations the ability to benchmark the capability maturity of a given cyber range. This research adopted a synthesised approach to the development of a CMM, grounded in prior research and focused on the production of a conceptual model that provides a useful level of abstraction.
In order to achieve this goal, the core capability elements of a cyber range are defined with their relative importance, allowing for the development of a proposed classification of cyber range levels. An analysis of data gathered during the course of an expert review, together with other research, further supported the development of the conceptual model. In the context of cyber range capability, classification includes the ability of the cyber range to perform its functions optimally with different core capability elements, focusing on the Measurement of Capability (MoC) with its elements, namely effect, performance and threat ability. Cyber range maturity can evolve over time and can be defined through the Measurement of Maturity (MoM) with its elements, namely people, processes, and technology. The combination of these measurements, utilising the CMM for a CR, determines the capability maturity level of a CR. The primary outcome of this research is the proposed level-based CMM framework for a cyber range, developed using adopted and synthesised CMMs, the analysis of an expert review, and the mapping of the results.
- Full Text:
- Date Issued: 2020
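The abstract above combines a Measurement of Capability (effect, performance, threat ability) with a Measurement of Maturity (people, processes, technology) to place a cyber range at a level. A toy scoring function showing one way such a combination could work; the 1-5 scales, the averaging, and the min rule are invented for this sketch and may differ from the thesis's actual scheme:

```python
def cmm_level(moc, mom):
    """Combine MoC (effect, performance, threat_ability) and MoM (people,
    processes, technology), each element scored 1-5, into an overall level.

    The min rule encodes one plausible reading: a range is only as capable
    or mature as its weaker measurement. This is an illustrative choice,
    not the thesis's published scoring scheme.
    """
    capability = sum(moc[k] for k in ("effect", "performance", "threat_ability")) / 3
    maturity = sum(mom[k] for k in ("people", "processes", "technology")) / 3
    return round(min(capability, maturity))

# Hypothetical assessment: strong technology, weaker staffing and processes.
level = cmm_level(
    {"effect": 4, "performance": 3, "threat_ability": 4},
    {"people": 2, "processes": 3, "technology": 4},
)
```

With these hypothetical scores, maturity (average 3.0) caps the result below capability (average 3.67), so the range lands at level 3, which is the kind of benchmark comparison across organisations that the CMM is meant to enable.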
An appraisal of secure, wireless grid-enabled data warehousing
- Authors: Seelo, Gaolathe
- Date: 2007
- Subjects: Data warehousing , Computer security
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9790 , http://hdl.handle.net/10948/602 , http://hdl.handle.net/10948/d1011700 , Data warehousing , Computer security
- Description: In most research, appropriate collections of data play a significant role in aiding decision-making processes. This is more critical if the data is being accessed across organisational barriers. Further, for the data to be mined and analysed efficiently, to aid decision-making processes, it must be harnessed in a suitably-structured fashion. There is, for example, a need to perform diverse data analyses and interpretation of structured (non-personal) HIV/AIDS patient-data from various quarters in South Africa. Although this data does exist, to some extent, it is autonomously owned and stored in disparate data storages, and not readily available to all interested parties. In order to put this data to meaningful use, it is imperative to integrate and store this data in a manner in which it can be better utilized by all those involved in the ontological field. This implies integration of (and hence, interoperability), and appropriate accessibility to, the information systems of the autonomous organizations providing data and data-processing. This is a typical problem-scenario for a Virtual Inter-Organisational Information System (VIOIS), proposed in this study. The VIOIS envisaged is a hypothetical, secure, Wireless Grid-enabled Data Warehouse (WGDW) that enables IOIS interaction, such as the storage and processing of HIV/AIDS patient-data to be utilized for HIV/AIDS-specific research. The proposed WGDW offers a methodical approach for arriving at such a collaborative (HIV/AIDS research) integrated system. The proposed WGDW is a virtual community that consists mainly of data-providers, service-providers and information-consumers. The WGDW-basis resulted from a systematic literature survey that covered a variety of technologies and standards that support data storage, data management, computation and connectivity between virtual community members in Grid computing contexts.
A Grid computing paradigm is proposed for data storage, data management and computation in the WGDW. Informational or analytical processing will be enabled through data warehousing, while connectivity will be attained wirelessly (for addressing the paucity of connectivity infrastructure in rural parts of developing countries, like South Africa).
- Full Text:
- Date Issued: 2007
Amber : a zero-interaction honeypot with distributed intelligence
- Authors: Schoeman, Adam
- Date: 2015
- Subjects: Security systems -- Security measures , Computer viruses , Intrusion detection systems (Computer security) , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4716 , http://hdl.handle.net/10962/d1017938
- Description: For the greater part, security controls are based on the principle of Decision through Detection (DtD). The exception to this is a honeypot, which analyses interactions between a third party and itself while occupying a piece of unused information space. As honeypots are not located on productive information resources, any interaction with them can be assumed to be non-productive. This allows the honeypot to make decisions based simply on the presence of data, rather than on the behaviour of the data. But due to limited human resources, the uptake of honeypots in the South African market has been underwhelming. Amber attempts to change this by offering a zero-interaction security system, which uses the honeypot approach of Decision through Presence (DtP) to generate a blacklist of third parties, which can be passed on to a network enforcer. Empirical testing has proved the usefulness of this alternative and low-cost approach in defending networks. The functionality of the system was also extended by installing nodes in different geographical locations and streaming their detections into the central Amber hive.
- Full Text:
- Date Issued: 2015
Limiting vulnerability exposure through effective patch management: threat mitigation through vulnerability remediation
- Authors: White, Dominic Stjohn Dolin
- Date: 2007 , 2007-02-08
- Subjects: Computer networks -- Security measures , Computer viruses , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4629 , http://hdl.handle.net/10962/d1006510 , Computer networks -- Security measures , Computer viruses , Computer security
- Description: This document aims to provide a complete discussion on vulnerability and patch management. The first chapters look at the trends relating to vulnerabilities, exploits, attacks and patches. These trends describe the drivers of patch and vulnerability management and situate the discussion in the current security climate. The following chapters then aim to present both policy and technical solutions to the problem. The policies described lay out a comprehensive set of steps that can be followed by any organisation to implement its own patch management policy, including practical advice on integration with other policies, managing risk, identifying vulnerabilities, strategies for reducing downtime and generating metrics to measure progress. Having covered the steps that can be taken by users, a strategy describing how best a vendor should implement a related patch release policy is provided. An argument is made that current monthly patch release schedules are inadequate to allow users to most effectively and timeously mitigate vulnerabilities. The final chapters discuss the technical aspect of automating parts of the policies described. In particular, the concept of 'defense in depth' is used to discuss additional strategies for 'buying time' during the patch process. The document then goes on to conclude that, in the face of increasing malicious activity and more complex patching, solid frameworks such as those provided in this document are required to ensure an organisation can fully manage the patching process. However, more research is required to fully understand vulnerabilities and exploits. In particular, more attention must be paid to threats, as little work has been done to fully understand threat-agent capabilities and activities on a day-to-day basis.
- Full Text:
- Date Issued: 2007
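The abstract above mentions generating metrics to measure patching progress; one common metric is the mean time between a vulnerability's disclosure and its remediation. The function name and record shape below are illustrative assumptions, not drawn from the thesis itself.

```python
from datetime import date

def mean_days_to_patch(records):
    """Mean number of days from disclosure to patch deployment,
    over a list of (disclosed, patched) date pairs."""
    deltas = [(patched - disclosed).days for disclosed, patched in records]
    return sum(deltas) / len(deltas)

# Two hypothetical vulnerabilities, remediated after 10 and 14 days.
history = [
    (date(2007, 1, 2), date(2007, 1, 12)),
    (date(2007, 1, 9), date(2007, 1, 23)),
]
```

Tracked over time, a falling mean would be one way to show a patch management policy is working.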
Pseudo-random access compressed archive for security log data
- Authors: Radley, Johannes Jurgens
- Date: 2015
- Subjects: Computer security , Information storage and retrieval systems , Data compression (Computer science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4723 , http://hdl.handle.net/10962/d1020019
- Description: We are surrounded by an increasing number of devices and applications that produce a huge quantity of machine-generated data. Almost all of this machine data contains some element of security information that can be used to discover, monitor and investigate security events. The work proposes a pseudo-random access compressed storage method for log data, to be used with an information retrieval system that in turn provides the ability to search and correlate log data and the corresponding events. We explain the method for converting log files into distinct events and storing the events in a compressed file. This yields an entry identifier for each log entry that provides a pointer that can be used by indexing methods. The research also evaluates the compression performance penalties encountered by using this storage system, including decreased compression ratio, as well as increased compression and decompression times.
- Full Text:
- Date Issued: 2015
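A minimal sketch of the storage idea described above, under the assumption (not stated in the abstract) that events are grouped into fixed-size blocks that are compressed individually: the entry identifier (block, offset) then acts as the pointer an external index can use to fetch one entry without decompressing the whole archive. Class and method names are hypothetical.

```python
import zlib

class BlockedLogArchive:
    """Pseudo-random-access compressed log store: events are buffered,
    compressed in fixed-size blocks, and addressed by (block, offset)
    entry identifiers suitable for use by an indexing method.
    Events are assumed not to contain newlines in this sketch."""

    def __init__(self, events_per_block=100):
        self.events_per_block = events_per_block
        self.blocks = []      # zlib-compressed blobs of joined events
        self._pending = []    # events not yet flushed to a block

    def append(self, event):
        entry_id = (len(self.blocks), len(self._pending))
        self._pending.append(event)
        if len(self._pending) == self.events_per_block:
            self.flush()
        return entry_id       # pointer usable by an external index

    def flush(self):
        if self._pending:
            data = "\n".join(self._pending).encode("utf-8")
            self.blocks.append(zlib.compress(data))
            self._pending = []

    def get(self, entry_id):
        block_no, offset = entry_id
        if block_no == len(self.blocks):   # entry still in open block
            return self._pending[offset]
        data = zlib.decompress(self.blocks[block_no]).decode("utf-8")
        return data.split("\n")[offset]
```

The block size trades the compression-ratio and decompression-time penalties the study measures against how much must be decompressed per lookup.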
Deploying DNSSEC in islands of security
- Authors: Murisa, Wesley Vengayi
- Date: 2013 , 2013-03-31
- Subjects: Internet domain names , Computer security , Computer network protocols , Computer security -- Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4577 , http://hdl.handle.net/10962/d1003053 , Internet domain names , Computer security , Computer network protocols , Computer security -- Africa
- Description: The Domain Name System (DNS), a name resolution protocol, is one of the vulnerable network protocols that has been subjected to many security attacks, such as cache poisoning, denial of service and the 'Kaminsky' spoofing attack. When DNS was designed, security was not incorporated into its design. The DNS Security Extensions (DNSSEC) provide security to the name resolution process by using public key cryptosystems. Although DNSSEC has backward compatibility with unsecured zones, it only offers security to clients when communicating with security-aware zones. Widespread deployment of DNSSEC is therefore necessary to secure the name resolution process and provide security to the Internet. Only a few Top Level Domains (TLDs) have deployed DNSSEC; this inherently makes it difficult for their sub-domains to implement the security extensions to the DNS. This study analyses mechanisms that can be used by domains in islands of security to deploy DNSSEC so that the name resolution process can be secured in two specific cases: where the TLD is not signed, or where the domain registrar is not able to support signed domains. The DNS client-side mechanisms evaluated in this study include web browser plug-ins, local validating resolvers and domain look-aside validation. The results of the study show that web browser plug-ins cannot work on their own without local validating resolvers. The web browser validators, however, proved to be useful in indicating to the user whether a domain has been validated or not. Local resolvers present a more secure option for Internet users who cannot trust the communication channel between their stub resolvers and remote name servers. However, they do not provide a way of showing the user whether a domain name has been correctly validated or not. Based on the results of the tests conducted, it is recommended that local validators be used together with browser validators for visibility and improved security.
On the DNS server side, Domain Look-aside Validation (DLV) presents a viable alternative for organizations in islands of security, like most countries in Africa, where only two country code Top Level Domains (ccTLDs) have deployed DNSSEC. This research recommends the use of DLV by corporates to provide DNS security to both internal and external users accessing their web-based services.
- Full Text:
- Date Issued: 2013
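As context for the local-validating-resolver and look-aside recommendations above, a BIND 9 resolver of that era could enable validation plus DLV roughly as follows. This is a hedged sketch, not configuration from the thesis; `dnssec-lookaside auto` relied on the ISC DLV registry, which has since been decommissioned, and `dnssec-enable` has been removed from modern BIND releases.

```
options {
    // Enable DNSSEC processing and validation in the resolver
    dnssec-enable yes;
    dnssec-validation yes;
    // Consult the ISC DLV registry for zones whose parents are unsigned
    // (the "islands of security" case discussed above)
    dnssec-lookaside auto;
};
```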
An exploration into the use of webinjects by financial malware
- Authors: Forrester, Jock Ingram
- Date: 2014
- Subjects: Malware (Computer software) -- Analysis , Internet fraud , Computer crimes , Computer security , Electronic commerce
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4697 , http://hdl.handle.net/10962/d1012079 , Malware (Computer software) -- Analysis , Internet fraud , Computer crimes , Computer security , Electronic commerce
- Description: As the number of computing devices connected to the Internet increases and the Internet itself becomes more pervasive, so does the opportunity for criminals to use these devices in cybercrimes. Supporting the increase in cybercrime is the growth and maturity of the digital underground economy with strong links to its more visible and physical counterpart. The digital underground economy provides software and related services to equip the entrepreneurial cybercriminal with the appropriate skills and required tools. Financial malware, particularly the capability for injection of code into web browsers, has become one of the more profitable cybercrime tool sets due to its versatility and adaptability when targeting clients of institutions with an online presence, both in and outside of the financial industry. There are numerous families of financial malware available for use, with perhaps the most prevalent being Zeus and SpyEye. Criminals create (or purchase) and grow botnets of computing devices infected with financial malware that has been configured to attack clients of certain websites. In the research data set there are 483 configuration files containing approximately 40 000 webinjects that were captured from various financial malware botnets between October 2010 and June 2012. They were processed and analysed to determine the methods used by criminals to defraud either the user of the computing device, or the institution of which the user is a client. The configuration files contain the injection code that is executed in the web browser to create a surrogate interface, which is then used by the criminal to interact with the user and institution in order to commit fraud. Demographics on the captured data set are presented and case studies are documented based on the various methods used to defraud and bypass financial security controls across multiple industries. 
The case studies cover techniques used in social engineering, bypassing security controls and automated transfers.
- Full Text:
- Date Issued: 2014
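The configuration files described above carry webinject entries that splice extra HTML into targeted pages. The widely documented Zeus-style format uses a `set_url` line followed by `data_before`/`data_inject`/`data_after` sections, each terminated by `data_end`; the tiny parser below is a sketch of reading that format (the function name and the simplified handling are assumptions, and real configurations carry more fields).

```python
def parse_webinjects(text):
    """Parse a Zeus/SpyEye-style webinject configuration into a list of
    rules. Each rule targets a URL pattern and carries the markers used
    to splice extra HTML into the victim's browser session."""
    rules = []
    current = None
    section = None
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("set_url"):
            parts = stripped.split()
            current = {
                "url": parts[1],
                "flags": parts[2] if len(parts) > 2 else "",
                "data_before": "", "data_inject": "", "data_after": "",
            }
            rules.append(current)
        elif stripped in ("data_before", "data_inject", "data_after"):
            section = stripped          # start collecting this marker
        elif stripped == "data_end":
            section = None              # close the current marker
        elif section and current is not None:
            current[section] += line + "\n"
    return rules
```

In an infected browser, `data_before`/`data_after` anchor where the page is cut, and `data_inject` is the surrogate HTML (for example, an extra PIN field) inserted between them.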
Information security awareness: generic content, tools and techniques
- Authors: Mauwa, Hope
- Date: 2007
- Subjects: Computer security , Data protection , Computers -- Safety measures , Information technology -- Security measures
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9733 , http://hdl.handle.net/10948/560 , Computer security , Data protection , Computers -- Safety measures , Information technology -- Security measures
- Description: In today’s computing environment, awareness programmes play a much more important role in organizations’ complete information security programmes. Information security awareness programmes are there to change behaviour or reinforce good security practices, and to provide a baseline of security knowledge for all information users. Security awareness is a learning process which changes individual and organizational attitudes and perceptions, so that the importance of security and the adverse consequences of its failure are realized. Therefore, with proper awareness, employees become the most effective layer in an organization’s security defence. Given the important role that these awareness programmes play in organizations’ complete information security programmes, all organizations that are serious about information security must implement them. But though awareness programmes have become increasingly important, the level of awareness in most organizations is still low. It seems that the current approach to developing these programmes does not satisfy the needs of most organizations. Therefore, another approach, which tries to meet the needs of most organizations, is proposed in this project as part of the solution for raising the level of awareness programmes in organizations.
- Full Text:
- Date Issued: 2007
A model for cultivating resistance to social engineering attacks
- Authors: Jansson, Kenny
- Date: 2011
- Subjects: Computer security , Data protection , Human-computer interaction
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9744 , http://hdl.handle.net/10948/1588 , Computer security , Data protection , Human-computer interaction
- Description: The human being is commonly considered to be the weakest link in information security. Consequently, as information is one of the most critical assets in an organization today, it is essential that the human element is considered in deployments of information security countermeasures. However, the human element is often neglected in this regard. As a result, many criminals now target the user directly to obtain sensitive information, instead of spending days or even months trying to hack through systems. Some criminals target users by utilizing various social engineering techniques to deceive the user into disclosing information. For this reason, the users of the Internet and ICT-related technologies are nowadays very vulnerable to various social engineering attacks. As a contribution to increasing users’ social engineering awareness, a model – called SERUM – was devised. SERUM aims to cultivate social engineering resistance within a community by exposing the users of the community to ‘fake’ social engineering attacks. Users who react incorrectly to these attacks are instantly notified and requested to participate in an online social engineering awareness program. Thus, users are educated on demand. The model was implemented as a software system and was utilized to conduct a phishing exercise on all the students of the Nelson Mandela Metropolitan University. The aim of the phishing exercise was to determine whether SERUM is effective in cultivating social engineering resistant behaviour within a community. The phishing exercise proved successful and yielded positive results, indicating that a model like SERUM can indeed be used to educate users regarding phishing attacks.
- Full Text:
- Date Issued: 2011
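The on-demand education loop described above can be caricatured in a few lines: a user who reacts incorrectly to the simulated attack (for example, by clicking the lure) is flagged, notified, and enrolled in the awareness programme. The class, method and URL below are hypothetical and not SERUM's actual interface.

```python
class PhishingExercise:
    """Sketch of SERUM-style on-demand education: clicking a simulated
    phishing link enrols the user in an online awareness programme."""

    def __init__(self, awareness_url="https://awareness.example/course"):
        self.awareness_url = awareness_url  # hypothetical course location
        self.enrolled = set()

    def record_click(self, user_id):
        # An incorrect reaction triggers instant notification and
        # enrolment in the awareness programme.
        self.enrolled.add(user_id)
        return ("You clicked a simulated phishing link. "
                "Please complete: " + self.awareness_url)
```

The `enrolled` set stands in for the exercise's record of which community members still need the awareness training.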
A study of malicious software on the macOS operating system
- Authors: Regensberg, Mark Alan
- Date: 2019
- Subjects: Malware (Computer software) , Computer security , Computer viruses , Mac OS
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92302 , vital:30701
- Description: Much of the published malware research begins with a common refrain: the cost, quantum and complexity of threats are increasing, and research and practice should prioritise efforts to automate and reduce times to detect and prevent malware, while improving the consistency of categories and taxonomies applied to modern malware. Existing work related to malware targeting Apple's macOS platform has not been spared this approach, although limited research has been conducted on the true nature of threats faced by users of the operating system. While the available macOS-focused research consistently notes an increase in macOS users, devices and, ultimately, threats, an opportunity exists to understand the real nature of the threats faced by macOS users and to suggest potential avenues for future work. This research provides a view of the current state of macOS malware by analysing and exploring a dataset of malware detections on macOS endpoints captured over a period of eleven months by an anti-malware software vendor. The dataset is augmented with malware information provided by the widely used VirusTotal service, as well as the application of prior automated malware categorisation work: AVClass to categorise, and SSDeep to cluster and report on, the observed data. With the Windows and Android platforms frequently in the spotlight as targets for highly disruptive malware like botnets, ransomware and cryptominers, research and intuition seem to suggest that the threat of malware on this increasingly popular platform should be growing and evolving accordingly. Findings suggest that the direction and nature of this growth and evolution may not be entirely as clear as industry reports suggest. Adware and Potentially Unwanted Applications (PUAs) make up the vast majority of the detected threats, with remote access trojans (RATs), ransomware and cryptocurrency miners comprising a relatively small proportion of the detected malware.
This provides a number of avenues for potential future work to compare and contrast with research on other platforms, as well as identification of key factors that may influence its growth in the future.
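The SSDeep-based clustering mentioned in the abstract groups samples whose fuzzy hashes score above a similarity threshold. The sketch below illustrates that idea only: the hash strings are made up, the greedy single-pass strategy and 80-point threshold are illustrative assumptions, and `difflib.SequenceMatcher` stands in for the real `ssdeep.compare()` score.

```python
# Hypothetical sketch of similarity-based malware clustering, in the spirit
# of the SSDeep approach described above. difflib stands in for the real
# ssdeep.compare(); hash strings, threshold and strategy are illustrative.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> int:
    """Return a 0-100 similarity score between two fuzzy-hash strings."""
    return round(SequenceMatcher(None, a, b).ratio() * 100)


def cluster(hashes: list[str], threshold: int = 80) -> list[list[str]]:
    """Greedy single-pass clustering: join a sample to the first cluster
    whose representative scores at or above the threshold."""
    clusters: list[list[str]] = []
    for h in hashes:
        for c in clusters:
            if similarity(c[0], h) >= threshold:
                c.append(h)
                break
        else:
            clusters.append([h])
    return clusters


samples = [
    "3:AXGBicFlgVNhBGcL6wCrFQEv:AXGHsNhxLsr2C",  # made-up hash strings
    "3:AXGBicFlgVNhBGcL6wCrFQEv:AXGHsNhxLsr2D",
    "3:hMCEpFzA3jDhhhU5zKfQ:hR5Fzijvc",
]
groups = cluster(samples)
print(len(groups))  # the two near-identical hashes fall into one cluster
```

With a real ssdeep binding the `similarity` function would simply delegate to the library's comparison score; the clustering logic is unchanged.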
- Full Text:
- Date Issued: 2019
Data-centric security : towards a utopian model for protecting corporate data on mobile devices
- Authors: Mayisela, Simphiwe Hector
- Date: 2014
- Subjects: Computer security , Computer networks -- Security measures , Business enterprises -- Computer networks -- Security measures , Mobile computing -- Security measures , Mobile communication systems -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4688 , http://hdl.handle.net/10962/d1011094 , Computer security , Computer networks -- Security measures , Business enterprises -- Computer networks -- Security measures , Mobile computing -- Security measures , Mobile communication systems -- Security measures
- Description: Data-centric security is significant in understanding, assessing and mitigating the various risks and impacts of sharing information outside corporate boundaries. Information generally leaves corporate boundaries through mobile devices. Mobile devices continue to evolve as multi-functional tools for everyday life, surpassing their initial intended use. This added capability and increasingly extensive use of mobile devices does not come without a degree of risk - hence the need to guard and protect information as it exists beyond the corporate boundaries and throughout its lifecycle. Literature on existing models crafted to protect data, rather than the infrastructure in which the data resides, is reviewed. Technologies that organisations have implemented to adopt the data-centric model are studied. A utopian model that takes into account the shortcomings of existing technologies and the deficiencies of common theories is proposed. Two sets of qualitative studies are reported: the first is a preliminary online survey to assess the ubiquity of mobile devices and the extent of technology adoption towards implementation of the data-centric model; the second comprises a focus survey and expert interviews pertaining to technologies that organisations have implemented to adopt the data-centric model. The latter study revealed insufficient data at the time of writing for the results to be statistically significant; however, indicative trends supported the assertions documented in the literature review. The question that this research answers is whether or not current technology implementations designed to mitigate risks from mobile devices actually address business requirements. This research question, answered through these two sets of qualitative studies, revealed inconsistencies between the technology implementations and business requirements.
The thesis concludes by proposing a realistic model, based on the outcome of the qualitative study, which bridges the gap between the technology implementations and business requirements. Future work that could be conducted in light of the findings and comments from this research is also considered.
- Full Text:
- Date Issued: 2014
WSP3: a web service model for personal privacy protection
- Authors: Ophoff, Jacobus Albertus
- Date: 2003
- Subjects: Data protection , Computer security , Privacy, Right of
- Language: English
- Type: Thesis , Masters , MTech (Information Technology)
- Identifier: vital:10798 , http://hdl.handle.net/10948/272 , Data protection , Computer security , Privacy, Right of
- Description: The prevalent use of the Internet not only brings with it numerous advantages, but also some drawbacks. The biggest of these problems is the threat to the individual’s personal privacy. This privacy issue is playing a growing role with respect to technological advancements. While new service-based technologies are considerably increasing the scope of information flow, the cost is a loss of control over personal information and therefore privacy. Existing privacy protection measures might fail to provide effective privacy protection in these new environments. This dissertation focuses on the use of new technologies to improve the levels of personal privacy. In this regard the WSP3 (Web Service Model for Personal Privacy Protection) model is formulated. This model proposes a privacy protection scheme using Web Services. Having received tremendous industry backing, Web Services is a very topical technology, promising much in the evolution of the Internet. In our society privacy is highly valued and a very important issue. Protecting personal privacy in environments using new technologies is crucial for their future success. These facts, combined with the fact that the WSP3 model focuses on Web Service environments, lead to the following realizations for the model: The WSP3 model provides users with control over their personal information and allows them to express their desired level of privacy. Parties requiring access to a user’s information are explicitly defined by the user, as well as the information available to them. The WSP3 model utilizes a Web Services architecture to provide privacy protection. In addition, it integrates security techniques, such as cryptography, into the architecture as required. The WSP3 model integrates with current standards to maintain their benefits. This allows the implementation of the model in any environment supporting these base technologies.
In addition, the research involves the development of a prototype according to the model. This prototype serves as a proof-of-concept, illustrating the WSP3 model and all the technologies involved. The WSP3 model gives users control over their privacy and allows everyone to decide their own level of protection. By incorporating Web Services, the model also shows how new technologies can be used to offer solutions to existing problem areas.
- Full Text:
- Date Issued: 2003
Remote fidelity of Container-Based Network Emulators
- Authors: Peach, Schalk Willem
- Date: 2021-10-29
- Subjects: Computer networks Security measures , Intrusion detection systems (Computer security) , Computer security , Host-based intrusion detection systems (Computer security) , Emulators (Computer programs) , Computer network protocols , Container-Based Network Emulators (CBNEs) , Network Experimentation Platforms (NEPs)
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10962/192141 , vital:45199
- Description: This thesis examines whether Container-Based Network Emulators (CBNEs) are able to instantiate emulated nodes that provide sufficient realism to be used in information security experiments. The realism measure used is based on the information available from the point of view of a remote attacker. During the evaluation of a Container-Based Network Emulator (CBNE) as a platform to replicate production networks for information security experiments, it was observed that nmap fingerprinting returned Operating System (OS) family and version results inconsistent with those of the host OS. CBNEs utilise Linux namespaces, the technology used for containerisation, to instantiate "emulated" hosts for experimental networks. Linux containers partition resources of the host OS to create lightweight virtual machines that share a single OS kernel. As all emulated hosts share the same kernel in a CBNE network, there is a reasonable expectation that the fingerprints of the host OS and emulated hosts should be the same. Given how CBNEs instantiate emulated networks, and that fingerprinting returned inconsistent results, it was hypothesised that the technologies used to construct CBNEs are capable of influencing fingerprints generated by utilities such as nmap. It was predicted that hosts emulated using different CBNEs would show deviations in remotely generated fingerprints when compared to fingerprints generated for the host OS. An experimental network consisting of two emulated hosts and a Layer 2 switch was instantiated on multiple CBNEs using the same host OS. Active and passive fingerprinting was conducted between the emulated hosts to generate fingerprints and OS family and version matches. Passive fingerprinting failed to produce OS family and version matches, as the fingerprint databases for these utilities are no longer maintained.
For active fingerprinting the OS family results were consistent between the tested systems and the host OS, though the OS version results reported were inconsistent. A comparison of the generated fingerprints revealed that for certain CBNEs, fingerprint features related to network stack optimisations of the host OS deviated from other CBNEs and the host OS. The hypothesis that CBNEs can influence remotely generated fingerprints was partially confirmed: one CBNE system modified Linux kernel networking options, causing a deviation from fingerprints generated for other tested systems and the host OS. The hypothesis was also partially rejected, as the technologies used by the remaining CBNEs did not influence the remote fidelity of emulated hosts. , Thesis (MSc) -- Faculty of Science, Computer Science, 2021
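The fingerprint comparison described above boils down to diffing network-stack attributes between the host OS and each emulated host. The sketch below is not the thesis's actual tooling: the feature names and values are hypothetical examples of the kind of nmap-derived attributes (TTL, TCP window size, TCP options) that can deviate when an emulator tunes kernel networking options.

```python
# Illustrative comparison of nmap-style fingerprint features between a host
# OS and CBNE-emulated hosts. All names and values are hypothetical; a real
# study would parse these attributes out of nmap's fingerprint output.
host_os = {"ttl": 64, "tcp_window": 64240, "tcp_options": "MSS,SACK,TS,WS"}

emulated = {
    "cbne_a": {"ttl": 64, "tcp_window": 64240, "tcp_options": "MSS,SACK,TS,WS"},
    # cbne_b models an emulator that tuned a kernel networking option,
    # producing a smaller TCP window than the host OS advertises.
    "cbne_b": {"ttl": 64, "tcp_window": 29200, "tcp_options": "MSS,SACK,TS,WS"},
}


def deviations(reference: dict, candidate: dict) -> dict:
    """Return the features where the candidate fingerprint differs,
    mapped to (reference_value, candidate_value) pairs."""
    return {k: (reference[k], candidate[k])
            for k in reference if candidate.get(k) != reference[k]}


for name, fingerprint in emulated.items():
    diff = deviations(host_os, fingerprint)
    print(name, "matches host" if not diff else f"deviates: {diff}")
```

Under the shared-kernel expectation stated above, every emulated host should report an empty deviation set; a non-empty set flags an emulator whose construction altered the remotely observable network stack.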
- Full Text:
- Date Issued: 2021-10-29
A model to address factors that could influence the information security behaviour of computing graduates
- Authors: Mabece, Thandolwethu , Thomson, Kerry-Lynn
- Date: 2017
- Subjects: Information technology -- Security measures , Computer security , Cyber intelligence (Computer security)
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: http://hdl.handle.net/10948/7355 , vital:21339
- Description: The fact that information is ubiquitous throughout most modern organisations cannot be denied. Information is not merely used as an enabler in modern organisations today, but is also used to gain a competitive advantage over competitors. Thus, information has become one of the most important business assets. It is, therefore, imperative that organisations protect information assets as they would protect other business assets. This is typically achieved through implementing various security measures. Technological and procedural security measures are largely dependent on humans. However, the incorrect behaviour of humans poses a significant threat to the protection of these information assets. Thus, it is vital to understand how human behaviour may impact the protection of information assets. While the focus of much literature is on organisations, the focus of this research is on higher education institutions and the factors of information security, with a specific focus on influencing the information security behaviour of computing graduates. Typically, computing graduates would be employed in organisations in various careers such as software developers, network administrators, database administrators and information systems analysts. Employment in these careers means that they would be closely interacting with information assets and information systems. A real problem, as identified by this research, is that currently, many higher education institutions are not consciously doing enough to positively influence the information security behaviour of their computing graduates. This research presents a model to address various factors that could influence the information security behaviour of computing graduates. The aim of this model is to assist computing educators in influencing computing graduates to adopt more secure behaviour, such as security assurance behaviour. A literature review was conducted to identify the research problem.
A number of theories, such as the Theory of Planned Behaviour, Protection Motivation Theory and Social Cognitive Theory, were identified as being relevant for this research, as they provided a theoretical foundation for factors that could influence the information security behaviour of computing graduates. Additionally, a survey was conducted to gather the opinions and perceptions of computing educators relating to information security education in higher education institutions. Results indicated that information security is not pervasively integrated within the higher education institutions surveyed. Furthermore, results revealed that most computing students were perceived not to be behaving in a secure manner with regard to information security. This could negatively influence their information security behaviour as computing graduates employed within organisations. Computing educators therefore require assistance in influencing the information security behaviour of these computing students. The proposed model to provide this assistance was developed through argumentation and modelling.
- Full Text:
- Date Issued: 2017