An exploratory investigation into an Integrated Vulnerability and Patch Management Framework
- Authors: Carstens, Duane
- Date: 2021-04
- Subjects: Computer security , Computer security -- Management , Computer networks -- Security measures , Patch Management , Integrated Vulnerability
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/177940 , vital:42892
- Description: In the rapidly changing world of cybersecurity, the constant increase in vulnerabilities continues to be a prevalent issue for many organisations. Malicious actors are aware that most organisations cannot timeously patch known vulnerabilities and are ill-prepared to protect against newly created vulnerabilities for which a signature or patch is not yet available. Consequently, information security personnel face ongoing challenges in mitigating these risks. This research considers the problem of remediation in a world of increasing vulnerabilities. The current paradigm of vulnerability and patch management is reviewed using a pragmatic approach to all variables associated with these services/practices, establishing what is and is not working in terms of remediation. In addition to this analysis, a taxonomy is created to provide a graphical representation, based on existing literature, of all variables associated with vulnerability and patch management. Frameworks currently used in industry to create an effective engagement model between vulnerability and patch management services are considered. The link between quantifying a threat, vulnerability and consequence; what Microsoft has available for patching; and the action plan for the resulting vulnerabilities is explored. Furthermore, the processes and means of communication between each of these services are investigated to ensure effective remediation of vulnerabilities, ultimately improving an organisation's security risk posture. To measure the security risk posture effectively, progress across these services is captured in a single averaged measurement metric. The outcome of the research highlights influencing factors that impact successful vulnerability management, in line with the themes identified in the research taxonomy. 
These influencing factors are, however, significantly undermined when personnel within the same organisation lack a clear and consistent understanding of their roles, and of the organisational capabilities and objectives required for effective vulnerability and patch management. , Thesis (MSc) -- Faculty of Science, Computer Science, 2021
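The abstract does not specify how the single averaged measurement metric is computed; a minimal sketch of one plausible reading, averaging per-service remediation progress into one posture score, might look like the following (the service names, scores and equal weighting are illustrative assumptions, not the author's actual method or data):

```python
# Hypothetical sketch: combine per-service remediation progress into a
# single averaged security-posture metric. Service names and scores are
# invented for illustration.

def posture_metric(progress: dict[str, float]) -> float:
    """Average per-service progress scores (each in [0, 1]) into one figure."""
    if not progress:
        raise ValueError("no services to measure")
    for service, score in progress.items():
        if not 0.0 <= score <= 1.0:
            raise ValueError(f"score for {service} out of range")
    return sum(progress.values()) / len(progress)

scores = {
    "vulnerability_management": 0.72,
    "patch_management": 0.58,
    "threat_quantification": 0.65,
}
print(round(posture_metric(scores), 2))  # single averaged posture score
```

A single figure like this trades detail for comparability over time, which matches the abstract's aim of tracking progress between services.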
- Full Text:
- Date Issued: 2021-04
Evolving a secure grid-enabled, distributed data warehouse : a standards-based perspective
- Authors: Li, Xiao-Yu
- Date: 2007
- Subjects: Computational grids (Computer systems) , Computer networks -- Security measures , Electronic data processing -- Distributed processing
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9738 , http://hdl.handle.net/10948/544 , Computational grids (Computer systems) , Computer networks -- Security measures , Electronic data processing -- Distributed processing
- Description: As digital data collection has increased in scale and number, such collections have become an important type of resource serving a wide community of researchers. Cross-institutional data sharing and collaboration offer a suitable approach to supporting research institutions that lack data and the related IT infrastructure. Grid computing has become a widely adopted approach to enabling cross-institutional resource sharing and collaboration: it integrates a distributed and heterogeneous collection of locally managed users and resources. This project proposes a distributed data warehouse system which uses Grid technology to enable data access, integration and collaborative operations across multiple distributed institutions in the context of HIV/AIDS research. This study is based on wider research into an OGSA-based Grid services architecture, comprising a data-analysis system which utilizes a data warehouse, data marts, and a near-line operational database hosted by distributed institutions. Within this framework, specific patterns for collaboration, interoperability, resource virtualization and security are included. The heterogeneous and dynamic nature of the Grid environment introduces a number of security challenges. This study therefore also addresses a set of particular security aspects, including PKI-based authentication, single sign-on, dynamic delegation, and attribute-based authorization. These mechanisms, as supported by the Globus Toolkit’s Grid Security Infrastructure, are used to enable interoperability, establish trust relationships between the various security mechanisms and policies of different institutions, manage credentials, and ensure secure interactions.
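The attribute-based authorization mentioned in this abstract can be illustrated with a minimal sketch. The resource names, attribute labels and policy below are invented for illustration; they are not drawn from the thesis or from the Globus Toolkit API:

```python
# Hypothetical sketch of attribute-based authorization: a request is
# permitted only if the caller's attributes satisfy the policy attached
# to the resource. All names and policy entries are illustrative.

POLICY: dict[str, set[str]] = {
    # resource -> attributes required to access it
    "data_warehouse/query": {"role:researcher", "institution:member"},
    "data_mart/admin": {"role:administrator"},
}

def authorize(resource: str, attributes: set[str]) -> bool:
    """Grant access only if every required attribute is present (default deny)."""
    required = POLICY.get(resource)
    if required is None:
        return False  # unknown resources are denied outright
    return required <= attributes  # subset test: all requirements met

print(authorize("data_warehouse/query",
                {"role:researcher", "institution:member"}))  # True
print(authorize("data_mart/admin", {"role:researcher"}))     # False
```

The default-deny stance sketched here mirrors the general principle behind such schemes: access is derived from the caller's attributes rather than from a fixed per-user list.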
- Full Text:
- Date Issued: 2007
Pursuing cost-effective secure network micro-segmentation
- Authors: Fürst, Mark Richard
- Date: 2018
- Subjects: Computer networks -- Security measures , Computer networks -- Access control , Firewalls (Computer security) , IPSec (Computer network protocol) , Network micro-segmentation
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/131106 , vital:36524
- Description: Traditional network segmentation allows discrete trust levels to be defined for different network segments, using physical firewalls or routers that control north-south traffic flowing between different interfaces. This technique reduces the attack surface should an attacker breach one of the perimeter defences. However, east-west traffic flowing between endpoints within the same network segment does not pass through a firewall, and an attacker may be able to move laterally between endpoints within that segment. Network micro-segmentation was designed to address the challenge of controlling east-west traffic, and various solutions have been released with differing capabilities and feature sets. These approaches range from simple network switch Access Control List-based segmentation to complex hypervisor-based software-defined security segments, defined down to the individual workload, container or process level and enforced via policy-based security controls for each segment. Several commercial solutions for network micro-segmentation exist, but these are primarily focused on physical and cloud data centres, and are often accompanied by significant capital outlay and resource requirements. Given these constraints, this research determines whether existing tools provided with operating systems can be re-purposed to implement micro-segmentation and restrict east-west traffic within one or more network segments for a small-to-medium-sized corporate network. To this end, a proof-of-concept lab environment was built with a heterogeneous mix of Windows and Linux virtual servers and workstations deployed in an Active Directory domain. The use of Group Policy Objects to deploy IPsec Server and Domain Isolation for controlling traffic between endpoints is examined, in conjunction with the IPsec Authentication Header and Encapsulating Security Payload modes as an additional layer of security. 
The outcome of the research shows that revisiting existing tools can enable organisations to implement an additional, cost-effective secure layer of defence in their network.
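The policy model underlying micro-segmentation, as opposed to the GPO/IPsec mechanism the thesis actually uses, can be sketched as a default-deny allow-list for east-west flows. The endpoint names, ports and rules below are invented for illustration:

```python
# Hypothetical sketch: default-deny east-west policy within one segment.
# Each rule permits a single (source, destination, port) flow; any flow
# not explicitly listed is blocked. Names and rules are illustrative.

ALLOWED_FLOWS: set[tuple[str, str, int]] = {
    ("workstation-01", "file-server", 445),  # SMB to the file server
    ("workstation-01", "dc-01", 389),        # LDAP to the domain controller
}

def east_west_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: permit only explicitly allowed intra-segment flows."""
    return (src, dst, port) in ALLOWED_FLOWS

# Lateral movement between peer workstations is blocked by default:
print(east_west_allowed("workstation-01", "file-server", 445))     # True
print(east_west_allowed("workstation-01", "workstation-02", 445))  # False
```

Whether enforced by IPsec connection security rules, host firewalls or a hypervisor, the essential effect is the same: traffic between peers in one segment is denied unless a rule explicitly permits it.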
- Full Text:
- Date Issued: 2018
An information security governance model for industrial control systems
- Authors: Webster, Zynn
- Date: 2018
- Subjects: Computer networks -- Security measures , Data protection , Computer security , Business enterprises -- Computer networks -- Security measures
- Language: English
- Type: Thesis , Masters , MIT
- Identifier: http://hdl.handle.net/10948/36383 , vital:33934
- Description: Industrial Control Systems (ICS) is a term used to describe several types of control systems, including Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS) and Programmable Logic Controllers (PLC). These systems consist of a combination of control components (e.g. electrical, mechanical, pneumatic) which act together to achieve an industrial objective (e.g. manufacturing, or the transportation of matter or energy). ICS play a fundamental role in critical infrastructures such as electricity grids and the oil, gas and manufacturing industries. Initially, ICS had little resemblance to typical enterprise IT systems: they were isolated and ran proprietary control protocols on specialized hardware and software. However, with initiatives such as Industry 4.0 and the Industrial Internet of Things (IIoT), the nature of ICS has changed significantly. There is ever-increasing use of commercial operating systems and standard protocols like TCP/IP and Ethernet. Consequently, modern ICS increasingly resemble conventional enterprise IT systems, and such IT systems and networks are known to be vulnerable and to require extensive management to ensure Confidentiality, Integrity, and Availability. Since ICS are now adopting conventional IT characteristics, they are also inheriting the associated risks. However, owing to the functional area of ICS, the consequences of these threats are much more severe than those for enterprise IT systems, and the need to manage security for these systems with highly skilled IT personnel has become essential. This research therefore focused on identifying which unique security controls for ICS and enterprise IT systems can be combined and/or tailored to provide the organization with a single set of comprehensive security controls. 
By investigating existing standards and best practices for both enterprise IT and ICS environments, this study has produced a single set of security controls and has shown how these controls can be integrated into an existing information security governance model, which organizations can use as a basis for generating a security framework to secure not only their enterprise IT systems but also their ICS.
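The combining and tailoring of IT and ICS control sets described in this abstract can be pictured as a simple merge in which ICS-specific tailoring takes precedence where the two sets overlap. The control names and descriptions below are invented placeholders, not entries from any actual standard:

```python
# Hypothetical sketch: merge enterprise-IT and ICS security controls into
# one comprehensive set, keeping the ICS-specific tailoring when the same
# control appears in both. Control names are invented placeholders.

def combine_controls(it: dict[str, str], ics: dict[str, str]) -> dict[str, str]:
    """Union of both control sets; the ICS-tailored variant wins on overlap."""
    combined = dict(it)
    combined.update(ics)  # shared controls take the ICS-tailored value
    return combined

it_controls = {
    "access-control": "enterprise baseline",
    "boundary-protection": "enterprise baseline",
    "patching": "monthly cycle",
}
ics_controls = {
    "patching": "maintenance-window only",  # tailored for plant availability
    "safety-interlocks": "ICS-specific",
}

merged = combine_controls(it_controls, ics_controls)
print(len(merged))         # one combined set covering both environments
print(merged["patching"])  # the ICS-tailored variant
```

Letting the ICS variant override on overlap reflects the abstract's point that ICS consequences are more severe, so availability-driven tailoring should not be weakened by the enterprise baseline.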
- Full Text:
- Date Issued: 2018
Governing information security within the context of "bring your own device" in small, medium and micro enterprises
- Authors: Fani, Noluvuyo
- Date: 2017
- Subjects: Data protection , Computer security -- Management , Computer networks -- Security measures
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: http://hdl.handle.net/10948/7626 , vital:22114
- Description: Throughout history, information has been core to the communication, processing and storage of most tasks in the organisation, in this case in Small, Medium and Micro Enterprises (SMMEs). The implementation of these tasks relies on Information and Communication Technology (ICT). ICT is constantly evolving, and with each development, organisations must adapt to the changing environment. They can do so by incorporating innovative ICT that allows employees to perform their tasks with ease anywhere and anytime, whilst reducing the costs affiliated with the ICT. In this modern era, performing tasks with ease anywhere and anytime requires that the employee is mobile whilst using the ICT. As a result, a relatively new phenomenon called “Bring Your Own Device” (BYOD) is currently infiltrating most organisations, where personally-owned mobile devices are used to access organisational information for conducting the various tasks of the organisation. The use of BYOD brings the benefits mentioned above, such as performing organisational tasks anywhere and anytime. However, alongside these benefits, organisations should be aware that BYOD carries risks, and should therefore implement BYOD with proper management thereof.
- Full Text:
- Date Issued: 2017
A comparison of exact string search algorithms for deep packet inspection
- Authors: Hunt, Kieran
- Date: 2018
- Subjects: Algorithms , Firewalls (Computer security) , Computer networks -- Security measures , Intrusion detection systems (Computer security) , Deep Packet Inspection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/60629 , vital:27807
- Description: Every day, computer networks throughout the world face a constant onslaught of attacks. To combat these, network administrators are forced to employ a multitude of mitigating measures. Devices such as firewalls and Intrusion Detection Systems are prevalent today and employ extensive Deep Packet Inspection to scrutinise each piece of network traffic. Such systems usually require specialised hardware to meet the demand imposed by high-throughput networks, and hardware like this is extremely expensive and singular in its function. It is with this in mind that string search algorithms are introduced. These algorithms have been proven to perform well when searching through large volumes of text and may perform equally well in the context of Deep Packet Inspection. String search algorithms are designed to match a single pattern to a substring of a given piece of text, which is not unlike the heuristics employed by traditional Deep Packet Inspection systems. This research compares the performance of a large number of string search algorithms during packet processing. Deep Packet Inspection places stringent restrictions on the reliability and speed of the algorithms due to increased performance pressures. A test system was designed to properly test the string search algorithms in the context of Deep Packet Inspection, allowing precise and repeatable tests of each algorithm and their subsequent comparison. Of the algorithms tested, the Horspool and Quick Search algorithms posted the best results for both speed and reliability, while the Not So Naive and Rabin-Karp algorithms were slowest overall.
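The Horspool algorithm, which this abstract reports among the fastest tested, can be sketched in a standard textbook formulation (this is not the implementation benchmarked in the thesis). It precomputes, for each byte, how far the search window may safely shift when a comparison fails:

```python
# Boyer-Moore-Horspool exact string search: a standard textbook
# formulation, not the thesis's benchmarked implementation.

def horspool(pattern: bytes, text: bytes) -> int:
    """Return the index of the first occurrence of pattern in text, or -1."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    if m > n:
        return -1
    # Shift table: distance from each byte's last occurrence (excluding the
    # final position) to the end of the pattern.
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        # Shift by the table entry for the window's last byte; bytes that do
        # not occur in the pattern allow a full-length shift.
        i += shift.get(text[i + m - 1], m)
    return -1

payload = b"GET /index.html HTTP/1.1"
print(horspool(b"index", payload))  # 5
```

In a Deep Packet Inspection setting, `payload` would be a packet's bytes and `pattern` a signature; the skip table is what lets Horspool examine far fewer positions than a naive scan on typical traffic.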
- Full Text:
- Date Issued: 2018
Governing information security using organisational information security profiles
- Authors: Tyukala, Mkhululi
- Date: 2007
- Subjects: Data protection , Computer security -- Management , Computer networks -- Security measures
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9788 , http://hdl.handle.net/10948/626 , Data protection , Computer security -- Management , Computer networks -- Security measures
- Description: The corporate scandals of the last few years have changed the face of information security and its governance. Information security has been elevated to board-of-directors level due to the legislation and corporate governance regulations resulting from these scandals. Boards of directors now have corporate responsibility to ensure that the information assets of an organisation are secure, and are forced to embrace information security and make it part of business strategies. This new support from the board of directors gives information security weight, a voice from the top, and the financial muscle that other business activities enjoy. However, as an area made up of specialist activities, information security may not be as easily comprehended at board level as other business-related activities. Yet the board of directors needs to provide oversight of information security; that is, put an information security programme in place to ensure that information is adequately protected. This raises a number of challenges, one of which is: how can information security be understood, and well-informed decisions about it be made, at board level? This dissertation provides a mechanism to present information at board level on how information security is implemented according to the vision of the board of directors. The mechanism is built upon well-accepted and documented concepts of information security. The mechanism (termed an Organisational Information Security Profile, or OISP) will assist organisations with the initialisation, monitoring, measuring, reporting and reviewing of information security programmes. Ultimately, the OISP will make it possible to know whether the information security endeavours of the organisation are effective or not. If the information security programme is found to be ineffective, the OISP will facilitate the identification of the areas that are ineffective and of what caused the ineffectiveness. 
This dissertation also presents how the effectiveness or ineffectiveness of information security can be presented at board level using well-known visualisation methods. Finally, the contribution, limitations and areas that need further investigation are provided.
- Full Text:
- Date Issued: 2007
- Authors: Tyukala, Mkhululi
- Date: 2007
- Subjects: Data protection , Computer security -- Management , Computer networks -- Security measures
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9788 , http://hdl.handle.net/10948/626 , Data protection , Computer security -- Management , Computer networks -- Security measures
- Description: The corporate scandals of the last few years have changed the face of information security and its governance. Information security has been elevated to the board of director level due to legislation and corporate governance regulations resulting from the scandals. Now boards of directors have corporate responsibility to ensure that the information assets of an organisation are secure. They are forced to embrace information security and make it part of business strategies. The new support from the board of directors gives information security weight and the voice from the top as well as the financial muscle that other business activities experience. However, as an area that is made up of specialist activities, information security may not easily be comprehended at board level like other business related activities. Yet the board of directors needs to provide oversight of information security. That is, put an information security programme in place to ensure that information is adequately protected. This raises a number of challenges. One of the challenges is how can information security be understood and well informed decisions about it be made at the board level? This dissertation provides a mechanism to present information at board level on how information security is implemented according to the vision of the board of directors. This mechanism is built upon well accepted and documented concepts of information security. The mechanism (termed An Organisational Information Security Profile or OISP) will assist organisations with the initialisation, monitoring, measuring, reporting and reviewing of information security programmes. Ultimately, the OISP will make it possible to know if the information security endeavours of the organisation are effective or not. If the information security programme is found to be ineffective, The OISP will facilitate the pointing out of areas that are ineffective and what caused the ineffectiveness. 
This dissertation also presents how the effectiveness or ineffectiveness of information security can be presented at board level using well-known visualisation methods. Finally, the contribution, limitations and areas that need further investigation are provided.
- Full Text:
- Date Issued: 2007
Securing softswitches from malicious attacks
- Authors: Opie, Jake Weyman
- Date: 2007
- Subjects: Internet telephony -- Security measures , Computer networks -- Security measures , Digital telephone systems , Communication -- Technological innovations , Computer network protocols , TCP/IP (Computer network protocol) , Switching theory
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4683 , http://hdl.handle.net/10962/d1007714 , Internet telephony -- Security measures , Computer networks -- Security measures , Digital telephone systems , Communication -- Technological innovations , Computer network protocols , TCP/IP (Computer network protocol) , Switching theory
- Description: Traditionally, real-time communication, such as voice calls, has run on separate, closed networks. Of all the limitations that these networks had, the ability of malicious attacks to cripple communication was not a crucial one. This situation has changed radically now that real-time communication and data have merged to share the same network. The objective of this project is to investigate the securing of softswitches with functionality similar to Private Branch Exchanges (PBX) from malicious attacks. The focus of the project will be a practical investigation of how to secure ILANGA, an ASTERISK-based system under development at Rhodes University. The practical investigation that focuses on ILANGA is based on performing six varied experiments on the different components of ILANGA. Before the six experiments are performed, basic preliminary security measures and the restrictions placed on the access to the database are discussed. The outcomes of these experiments are discussed and the precise reasons why these attacks were either successful or unsuccessful are given. Suggestions of a theoretical nature on how to defend against the successful attacks are also presented.
- Full Text:
- Date Issued: 2007
Educating users about information security by means of game play
- Authors: Monk, Thomas Philippus
- Date: 2011
- Subjects: Computer security , Educational games -- Design , Computer networks -- Security measures
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9748 , http://hdl.handle.net/10948/1493 , Computer security , Educational games -- Design , Computer networks -- Security measures
- Description: Information is necessary for any business to function. However, if one does not manage one’s information assets properly then one’s business is likely to be at risk. By implementing Information Security controls, procedures, and/or safeguards one can secure information assets against risks. The risks of an organisation can be mitigated if employees implement safety measures. However, employees are often unable to work securely due to a lack of knowledge. This dissertation evaluates the premise that a computer game could be used to educate employees about Information Security. A game was developed with the aim of educating employees in this regard. If people were motivated to play the game, without external motivation from an organisation, then people would also, indirectly, be motivated to learn about Information Security. Therefore, a secondary aim of this game was to be self-motivating. An experiment was conducted in order to test whether or not these aims were met. The experiment was conducted on a play test group and a control group. The play test group played the game before completing a questionnaire that tested the information security knowledge of participants, while the control group simply completed the questionnaire. The two groups’ answers were compared in order to obtain results. This dissertation discusses the research design of the experiment and also provides an analysis of the results. The game design will be discussed which provides guidelines for future game designers to follow. The experiment indicated that the game is motivational, but perhaps not educational enough. However, the results suggest that a computer game can still be used to teach users about Information Security. Factors that involved consequence and repetition contributed towards the educational value of the game, whilst competitiveness and rewards contributed to the motivational aspect of the game.
- Full Text:
- Date Issued: 2011
Towards understanding and mitigating attacks leveraging zero-day exploits
- Authors: Smit, Liam
- Date: 2019
- Subjects: Computer crimes -- Prevention , Data protection , Hacking , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/115718 , vital:34218
- Description: Zero-day vulnerabilities are unknown and therefore not addressed, with the result that they can be exploited by attackers to gain unauthorised system access. In order to understand and mitigate attacks leveraging zero-days or unknown techniques, it is necessary to study the vulnerabilities, exploits and attacks that make use of them. In recent years there have been a number of leaks publishing such attacks using various methods to exploit vulnerabilities. This research seeks to understand what types of vulnerabilities exist, why and how these are exploited, and how to defend against such attacks by mitigating either the vulnerabilities or the method or process of exploiting them. By moving beyond merely remedying the vulnerabilities to defences that are able to prevent or detect the actions taken by attackers, the security of the information system will be better positioned to deal with future unknown threats. An interesting finding is how attackers move beyond the observable bounds of a system to circumvent security defences, for example, compromising syslog servers, or going down to lower system rings to gain access. However, defenders can counter this by employing defences that are external to the system, preventing attackers from disabling them or removing collected evidence after gaining system access. Attackers are able to defeat air-gaps via the leakage of electromagnetic radiation, as well as misdirect attribution by planting false artefacts for forensic analysis and attacking from third-party information systems. They analyse the methods of other attackers to learn new techniques. An example of this is the Umbrage project, whereby malware is analysed to decide whether it should be implemented as a proof of concept. Another important finding is that attackers respect defence mechanisms such as: remote syslog (e.g. firewall), core dump files, database auditing, and Tripwire (e.g. SlyHeretic). 
These defences all have the potential to result in the attacker being discovered. Attackers must either negate the defence mechanism or find unprotected targets. Defenders can use technologies such as encryption to defend against interception and man-in-the-middle attacks. They can also employ honeytokens and honeypots to alarm, misdirect, slow down and learn from attackers. By employing various tactics, defenders are able to increase their chance of detecting attacks and the time available to react to them, even for attacks exploiting hitherto unknown vulnerabilities. To summarise the information presented in this thesis and to show the practical importance thereof, an examination is presented of the NSA's network intrusion of the SWIFT organisation. It shows that the firewalls were exploited with remote code execution zero-days. This attack has a striking parallel in the approach used in the recent VPNFilter malware. If nothing else, the leaks provide information to other actors on how to attack and what to avoid. However, by studying state actors, we can gain insight into what other actors with fewer resources can do in the future.
- Full Text:
- Date Issued: 2019
A framework to mitigate phishing threats
- Authors: Frauenstein, Edwin Donald
- Date: 2013
- Subjects: Computer networks -- Security measures , Mobile computing -- Security measures , Online social networks -- Security measures
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9832 , http://hdl.handle.net/10948/d1021208
- Description: We live today in the information age with users being able to access and share information freely by using both personal computers and their handheld devices. This, in turn, has been made possible by the Internet. However, this poses security risks as attempts are made to use this same environment in order to compromise the confidentiality, integrity and availability of information. Accordingly, there is an urgent need for users and organisations to protect their information resources from agents posing a security threat. Organisations typically spend large amounts of money as well as dedicating resources to improve their technological defences against general security threats. However, the agents posing these threats are adopting social engineering techniques in order to bypass the technical measures which organisations are putting in place. These social engineering techniques are often effective because they target human behaviour, something which the majority of researchers believe is a far easier alternative than hacking information systems. As such, phishing effectively makes use of a combination of social engineering techniques which involve crafty technical emails and website designs which gain the trust of their victims. Within an organisational context, there are a number of areas which phishers exploit. These areas include human factors, organisational aspects and technological controls. Ironically, these same areas serve simultaneously as security measures against phishing attacks. However, each of these three areas mentioned above are characterised by gaps which arise as a result of human involvement. As a result, the current approach to mitigating phishing threats comprises a single-layer defence model only. However, this study proposes a holistic model which integrates each of these three areas by strengthening the human element in each of these areas by means of a security awareness, training and education programme.
- Full Text:
- Date Issued: 2013
An investigation into interoperable end-to-end mobile web service security
- Authors: Moyo, Thamsanqa
- Date: 2008
- Subjects: Web services , Mobile computing , Smartphones , Internetworking (Telecommunication) , Computer networks -- Security measures , XML (Document markup language) , Microsoft .NET Framework , Java (Computer program language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4595 , http://hdl.handle.net/10962/d1004838 , Web services , Mobile computing , Smartphones , Internetworking (Telecommunication) , Computer networks -- Security measures , XML (Document markup language) , Microsoft .NET Framework , Java (Computer program language)
- Description: The capacity to engage in web services transactions on smartphones is growing as these devices become increasingly powerful and sophisticated. This capacity for mobile web services is being realised through mobile applications that consume web services hosted on larger computing devices. This thesis investigates the effect that end-to-end web services security has on the interoperability between mobile web services requesters and traditional web services providers. SOAP web services are the preferred web services approach for this investigation. Although WS-Security is recognised as demanding on mobile hardware and network resources, the selection of appropriate WS-Security mechanisms lessens this burden. An attempt to implement such mechanisms on smartphones is carried out via an experiment. Smartphones are selected as the mobile device type used in the experiment. The experiment is conducted on the Java Micro Edition (Java ME) and the .NET Compact Framework (.NET CF) smartphone platforms. The experiment shows that the implementation of interoperable, end-to-end, mobile web services security on both platforms is reliant on third-party libraries. This reliance on third-party libraries results in poor developer support and exposes developers to the complexity of cryptography. The experiment also shows that there are no standard message size optimisation libraries available for both platforms. The implementation carried out on the .NET CF is also shown to rely on the underlying operating system. It is concluded that standard WS-Security APIs must be provided on smartphone platforms to avoid the problems of poor developer support and the additional complexity of cryptography. It is recommended that these APIs include a message optimisation technique. It is further recommended that WS-Security APIs be completely operating system independent when they are implemented in managed code. 
This thesis contributes by: providing a snapshot of mobile web services security; identifying the smartphone platform state of readiness for end-to-end secure web services; and providing a set of recommendations that may improve this state of readiness. These contributions are of increasing importance as mobile web services evolve from a simple point-to-point environment to the more complex enterprise environment.
- Full Text:
- Date Issued: 2008
Assessing program code through static structural similarity
- Authors: Naude, Kevin Alexander
- Date: 2007
- Subjects: Computer networks -- Security measures , Internet -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10478 , http://hdl.handle.net/10948/578 , Computer networks -- Security measures , Internet -- Security measures
- Description: Learning to write software requires much practice and frequent assessment. Consequently, the use of computers to assist in the assessment of computer programs has been important in supporting large classes at universities. The main approaches to the problem are dynamic analysis (testing student programs for expected output) and static analysis (direct analysis of the program code). The former is very sensitive to all kinds of errors in student programs, while the latter has traditionally only been used to assess quality, and not correctness. This research focuses on the application of static analysis, particularly structural similarity, to marking student programs. Existing traditional measures of similarity are limiting in that they are usually only effective on tree structures. In this regard they do not easily support dependencies in program code. Contemporary measures of structural similarity, such as similarity flooding, usually rely on an internal normalisation of scores. The effect is that the scores only have relative meaning, and cannot be interpreted in isolation, i.e. they are not meaningful for assessment. The SimRank measure is shown to have the same problem, but not because of normalisation. The problem with the SimRank measure arises from the fact that its scores depend on all possible mappings between the children of vertices being compared. The main contribution of this research is a novel graph similarity measure, the Weighted Assignment Similarity measure. It is related to SimRank, but derives propagation scores from only the locally optimal mapping between child vertices. The resulting similarity scores may be regarded as the percentage of mutual coverage between graphs. The measure is proven to converge for all directed acyclic graphs, and an efficient implementation is outlined for this case. Attributes on graph vertices and edges are often used to capture domain-specific information which is not structural in nature. 
It has been suggested that these should influence the similarity propagation, but no clear method for doing this has been reported. The second important contribution of this research is a general method for incorporating these local attribute similarities into the larger similarity propagation method. An example of attributes in program graphs is identifier names. The choice of identifiers in programs is arbitrary as they are purely symbolic. A problem facing any comparison between programs is that they are unlikely to use the same set of identifiers. This problem indicates that a mapping between the identifier sets is required. The third contribution of this research is a method for applying the structural similarity measure in a two-step process to find an optimal identifier mapping. This approach is both novel and valuable as it cleverly reuses the similarity measure as an existing resource. In general, programming assignments allow a large variety of solutions. Assessing student programs through structural similarity is only feasible if the diversity in the solution space can be addressed. This study narrows program diversity through a set of semantics-preserving program transformations that convert programs into a normal form. The application of the Weighted Assignment Similarity measure to marking student programs is investigated, and strong correlations are found with the human marker. It is shown that the most accurate assessment requires that programs not only be compared with a set of good solutions, but rather a mixed set of programs of varying levels of correctness. This research represents the first documented successful application of structural similarity to the marking of student programs.
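The SimRank behaviour this abstract critiques, propagation scores that average over every pairing of the neighbours of the two vertices being compared, can be sketched briefly. This is an illustrative reconstruction of standard SimRank only, not code from the thesis; the `simrank` function, the decay constant `C`, and the in-neighbour dictionary encoding are all assumptions:

```python
from itertools import product

def simrank(graph, C=0.8, iters=10):
    """Basic SimRank on a directed graph given as {node: [in-neighbours]}.

    s(a, b) is C times the average of s(x, y) over *all* pairs of
    in-neighbours (x of a, y of b) -- the dependence on every possible
    child mapping that the abstract identifies as problematic.
    """
    nodes = list(graph)
    sim = {(a, b): 1.0 if a == b else 0.0 for a in nodes for b in nodes}
    for _ in range(iters):
        new = {}
        for a, b in product(nodes, nodes):
            if a == b:
                new[(a, b)] = 1.0
                continue
            ins_a, ins_b = graph[a], graph[b]
            if not ins_a or not ins_b:
                # A node with no in-neighbours shares no evidence.
                new[(a, b)] = 0.0
                continue
            total = sum(sim[(x, y)] for x in ins_a for y in ins_b)
            new[(a, b)] = C * total / (len(ins_a) * len(ins_b))
        sim = new
    return sim
```

For two distinct nodes that share a single common in-neighbour, the score converges to `C` rather than 1, which illustrates why raw SimRank scores are hard to read as an absolute "percentage of coverage"; the thesis's Weighted Assignment Similarity measure instead propagates from only the locally optimal mapping between child vertices.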
- Full Text:
- Date Issued: 2007
- Authors: Naude, Kevin Alexander
- Date: 2007
- Subjects: Computer networks -- Security measures , Internet -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10478 , http://hdl.handle.net/10948/578 , Computer networks -- Security measures , Internet -- Security measures
- Description: Learning to write software requires much practice and frequent assessment. Consequently, the use of computers to assist in the assessment of computer programs has been important in supporting large classes at universities. The main approaches to the problem are dynamic analysis (testing student programs for expected output) and static analysis (direct analysis of the program code). The former is very sensitive to all kinds of errors in student programs, while the latter has traditionally only been used to assess quality, and not correctness. This research focusses on the application of static analysis, particularly structural similarity, to marking student programs. Existing traditional measures of similarity are limiting in that they are usually only effective on tree structures. In this regard they do not easily support dependencies in program code. Contemporary measures of structural similarity, such as similarity flooding, usually rely on an internal normalisation of scores. The effect is that the scores only have relative meaning, and cannot be interpreted in isolation, ie. they are not meaningful for assessment. The SimRank measure is shown to have the same problem, but not because of normalisation. The problem with the SimRank measure arises from the fact that its scores depend on all possible mappings between the children of vertices being compared. The main contribution of this research is a novel graph similarity measure, the Weighted Assignment Similarity measure. It is related to SimRank, but derives propagation scores from only the locally optimal mapping between child vertices. The resulting similarity scores may be regarded as the percentage of mutual coverage between graphs. The measure is proven to converge for all directed acyclic graphs, and an efficient implementation is outlined for this case. Attributes on graph vertices and edges are often used to capture domain specific information which is not structural in nature. 
It has been suggested that these should influence the similarity propagation, but no clear method for doing this has been reported. The second important contribution of this research is a general method for incorporating these local attribute similarities into the larger similarity propagation method. An example of attributes in program graphs are identifier names. The choice of identifiers in programs is arbitrary as they are purely symbolic. A problem facing any comparison between programs is that they are unlikely to use the same set of identifiers. This problem indicates that a mapping between the identifier sets is required. The third contribution of this research is a method for applying the structural similarity measure in a two step process to find an optimal identifier mapping. This approach is both novel and valuable as it cleverly reuses the similarity measure as an existing resource. In general, programming assignments allow a large variety of solutions. Assessing student programs through structural similarity is only feasible if the diversity in the solution space can be addressed. This study narrows program diversity through a set of semantic preserving program transformations that convert programs into a normal form. The application of the Weighted Assignment Similarity measure to marking student programs is investigated, and strong correlations are found with the human marker. It is shown that the most accurate assessment requires that programs not only be compared with a set of good solutions, but rather a mixed set of programs of varying levels of correctness. This research represents the first documented successful application of structural similarity to the marking of student programs.
- Full Text:
- Date Issued: 2007
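The abstract above describes a SimRank-related measure that propagates similarity scores through a locally optimal mapping between child vertices. As a rough illustration of that idea only (not the thesis's actual Weighted Assignment Similarity definition), the following sketch iterates pairwise scores over two toy DAGs, using a greedy one-to-one child assignment and an assumed damping factor:

```python
# Hedged sketch of SimRank-style similarity propagation with a greedy,
# locally optimal child mapping. The graphs, damping factor and greedy
# assignment are illustrative assumptions, not the thesis's formulation.

def children(graph, v):
    return graph.get(v, [])

def similarity(g1, g2, iterations=10, c=0.8):
    """Pairwise vertex similarity between two DAGs, propagating scores
    through a greedy one-to-one mapping of children."""
    v1, v2 = list(g1), list(g2)
    # include vertices that appear only as children
    for g, vs in ((g1, v1), (g2, v2)):
        for u in g:
            for w in g[u]:
                if w not in vs:
                    vs.append(w)
    sim = {(a, b): 1.0 for a in v1 for b in v2}  # optimistic start
    for _ in range(iterations):
        new = {}
        for a in v1:
            for b in v2:
                ca, cb = children(g1, a), children(g2, b)
                if not ca or not cb:
                    # two leaves match fully; a leaf never matches an inner node
                    new[(a, b)] = 1.0 if not ca and not cb else 0.0
                    continue
                # greedy locally optimal one-to-one assignment of children
                pairs = sorted(((sim[(x, y)], x, y) for x in ca for y in cb),
                               reverse=True)
                used_a, used_b, total = set(), set(), 0.0
                for s, x, y in pairs:
                    if x not in used_a and y not in used_b:
                        used_a.add(x); used_b.add(y)
                        total += s
                new[(a, b)] = c * total / max(len(ca), len(cb))
        sim = new
    return sim

# Two tiny program-like DAGs (adjacency lists)
g1 = {"root": ["a", "b"], "a": [], "b": []}
g2 = {"root": ["x", "y"], "x": [], "y": []}
scores = similarity(g1, g2)
print(round(scores[("root", "root")], 2))  # damped full coverage of children
```

Because the root's two children each match a counterpart perfectly, its score converges to the damping factor itself, which is the "percentage of mutual coverage" intuition the abstract mentions.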
Towards a threat assessment framework for consumer health wearables
- Authors: Mnjama, Javan Joshua
- Date: 2018
- Subjects: Activity trackers (Wearable technology) , Computer networks -- Security measures , Data protection , Information storage and retrieval systems -- Security systems , Computer security -- Software , Consumer Health Wearable Threat Assessment Framework , Design Science Research
- Language: English
- Type: text , Thesis , Masters , MCom
- Identifier: http://hdl.handle.net/10962/62649 , vital:28225
- Description: The collection of health data such as physical activity, consumption and physiological data through the use of consumer health wearables such as fitness trackers is very beneficial for the promotion of physical wellness. However, consumer health wearables and their associated applications are known to have privacy and security concerns that can potentially make the collected personal health data vulnerable to hackers. These concerns are attributed to security theoretical frameworks not sufficiently addressing the entirety of privacy and security concerns relating to the diverse technological ecosystem of consumer health wearables. The objective of this research was therefore to develop a threat assessment framework that can be used to guide the detection of vulnerabilities which affect consumer health wearables and their associated applications. To meet this objective, the Design Science Research methodology was used to develop the desired artefact (the Consumer Health Wearable Threat Assessment Framework). The framework comprises fourteen vulnerabilities classified according to Authentication, Authorization, Availability, Confidentiality, Non-Repudiation and Integrity. Through developing the artefact, the threat assessment framework was demonstrated on two fitness trackers and their associated applications. It was found that the framework was able to identify how these vulnerabilities affected these two test cases, based on the classification categories of the framework. The framework was also evaluated by four security experts who assessed its quality, utility and efficacy. The experts supported the use of the framework as a relevant and comprehensive means of guiding the detection of vulnerabilities in consumer health wearables and their associated applications. 
The implication of this research study is that the framework can be used by developers to better identify the vulnerabilities of consumer health wearables and their associated applications. This will assist in creating a more secure environment for the storage and use of health data by consumer health wearables.
- Full Text:
- Date Issued: 2018
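The six classification categories named in the abstract lend themselves to a simple checklist structure. The sketch below is a hypothetical illustration of such a classification: the category names come from the abstract, but the fourteen vulnerability labels are invented placeholders, not the thesis's actual list.

```python
# Hypothetical model of the framework's structure: fourteen vulnerabilities
# grouped under six security properties. Property names are from the
# abstract; the vulnerability labels are illustrative placeholders only.

FRAMEWORK = {
    "Authentication": ["weak default credentials", "no account lockout"],
    "Authorization": ["excessive app permissions", "missing role checks"],
    "Availability": ["denial-of-service via pairing flood",
                     "battery-drain attack"],
    "Confidentiality": ["unencrypted BLE traffic", "plaintext local storage",
                        "cleartext cloud sync"],
    "Non-Repudiation": ["missing audit logging", "spoofable device identity"],
    "Integrity": ["unvalidated firmware updates", "tamperable health records",
                  "unchecked API responses"],
}

def assess(device_findings):
    """Map observed findings on a device to the framework's categories."""
    report = {}
    for category, vulns in FRAMEWORK.items():
        hits = [v for v in vulns if v in device_findings]
        if hits:
            report[category] = hits
    return report

findings = {"unencrypted BLE traffic", "no account lockout"}
print(assess(findings))
```

A developer could run such a checklist against each wearable and its companion app, which mirrors how the thesis demonstrates the framework on two fitness trackers.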
A national strategy towards cultivating a cybersecurity culture in South Africa
- Authors: Gcaza, Noluxolo
- Date: 2017
- Subjects: Computer networks -- Security measures , Cyberspace -- Security measures , Computer security -- South Africa , Subculture -- South Africa
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10948/13735 , vital:27303
- Description: In modern society, cyberspace is interwoven into the daily lives of many. Cyberspace is increasingly redefining how people communicate as well as gain access to and share information. Technology has transformed the way the business world operates by introducing new ways of trading goods and services whilst bolstering traditional business methods. It has also altered the way nations govern. Thus, individuals, organisations and nations rely on this technology to perform significant functions. Alongside the positive innovations afforded by cyberspace, however, those who use it are exposed to a variety of risks. Cyberspace is beset by criminal activities such as cybercrime, fraud and identity theft, to name but a few. Nonetheless, the negative impact of these cyber threats does not outweigh the advantages of cyberspace. In light of such threats, there is a call for all entities that reap the benefits of online services to institute cybersecurity. As such, cybersecurity is a necessity for individuals, organisations and nations alike. In practice, cybersecurity focuses on preventing and mitigating certain security risks that might compromise the security of relevant assets. For a long time, technology-centred measures have been deemed the most significant solution for mitigating such risks. However, after a legacy of unsuccessful technological efforts, it became clear that such solutions in isolation are insufficient to mitigate all cyber-related risks. This is mainly due to the role that humans play in the security process, that is, the human factor. In isolation, technology-centred measures tend to fail to counter the human factor because of the perception among many users that security measures are an obstacle and consequently a waste of time. This user perception can be credited to the perceived difficulty of the security measure, as well as apparent mistrust and misinterpretation of the measure. 
Hence, cybersecurity necessitates the development of a solution that encourages acceptable user behaviour in the reality of cyberspace. The cultivation of a cybersecurity culture is thus regarded as the best approach for addressing the human factors that weaken the cybersecurity chain. While the role of culture in pursuing cybersecurity is well appreciated, research focusing on defining and measuring cybersecurity culture is still in its infancy. Furthermore, studies have shown that there are no widely accepted key concepts that delimit a cybersecurity culture. However, the notion that such a culture is not well-delineated has not prevented national governments from pursuing a culture in which all citizens behave in a way that promotes cybersecurity. As a result, many countries now offer national cybersecurity campaigns to foster a culture of cybersecurity at a national level. South Africa is among the nations that have identified cultivating a culture of cybersecurity as a strategic priority. However, there is an apparent lack of a practical plan to cultivate such a cybersecurity culture in South Africa. Thus, this study sought firstly to confirm from the existing body of knowledge that cybersecurity culture is indeed ill-defined and, secondly, to delineate what constitutes a national cybersecurity culture. Finally, and primarily, it sought to devise a national strategy that would assist SA in fulfilling its objective of cultivating a culture of cybersecurity on a national level.
- Full Text:
- Date Issued: 2017
An analysis of the risk exposure of adopting IPV6 in enterprise networks
- Authors: Berko, Istvan Sandor
- Date: 2015
- Subjects: International Workshop on Deploying the Future Infrastructure , Computer networks , Computer networks -- Security measures , Computer network protocols
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4722 , http://hdl.handle.net/10962/d1018918
- Description: The increased address pool of IPv6 presents changes in resource impact to the Enterprise that, if not adequately addressed, can turn risks that are locally significant in IPv4 into risks that impact the Enterprise in its entirety. The expected conclusion is that the IPv6 environment will impose significant changes in the Enterprise environment, which may negatively impact organisational security if the IPv6 nuances are not adequately addressed. This thesis reviews the risks related to the operation of enterprise networks with the introduction of IPv6. The global trends are discussed to provide insight and background to the IPv6 research space. Analysing the current state of readiness in enterprise networks quantifies the value of developing this thesis. The base controls that should be deployed in enterprise networks to prevent the abuse of IPv6 through tunnelling, and the protection of the enterprise access layer, are discussed. A series of case studies is presented, identifying and analysing the impact of certain changes in the IPv6 protocol on enterprise networks. The case studies also identify mitigation techniques to reduce risk.
- Full Text:
- Date Issued: 2015
Securing media streams in an Asterisk-based environment and evaluating the resulting performance cost
- Authors: Clayton, Bradley
- Date: 2007 , 2007-01-08
- Subjects: Asterisk (Computer file) , Computer networks -- Security measures , Internet telephony -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4647 , http://hdl.handle.net/10962/d1006606 , Asterisk (Computer file) , Computer networks -- Security measures , Internet telephony -- Security measures
- Description: When adding Confidentiality, Integrity and Availability (CIA) to a multi-user VoIP (Voice over IP) system, performance and quality are at risk. The aim of this study is twofold. Firstly, it describes current methods suitable for securing voice streams within a VoIP system and makes them available in an Asterisk-based VoIP environment. (Asterisk is a well-established, open-source TDM/VoIP PBX.) Secondly, this study evaluates the performance cost incurred after implementing each security method within the Asterisk-based system, using a special testbed suite, named DRAPA, which was developed expressly for this study. The three security methods implemented and studied were IPSec (Internet Protocol Security), SRTP (Secure Real-time Transport Protocol), and SIAX2 (Secure Inter-Asterisk eXchange 2 protocol). From the experiments, it was found that bandwidth and CPU usage were significantly affected by the addition of CIA. Ranking the three security methods in terms of these two resources, it was found that SRTP incurs the least bandwidth overhead, followed by SIAX2 and then IPSec. Where CPU utilisation is concerned, SIAX2 incurs the least overhead, followed by IPSec, and then SRTP.
- Full Text:
- Date Issued: 2007
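The bandwidth ranking reported in the abstract (SRTP cheapest, IPSec most expensive) can be made concrete with a back-of-envelope calculation. The header sizes below are typical textbook values for a G.711 stream with 20 ms packetisation, not measurements from the thesis's DRAPA testbed.

```python
# Rough per-packet bandwidth overhead of securing a G.711 RTP stream,
# illustrating why SRTP tends to cost less bandwidth than IPSec.
# All sizes are assumed typical values, not results from the study.

G711_PAYLOAD = 160          # bytes of audio per 20 ms packet
RTP, UDP, IPV4 = 12, 8, 20  # header sizes in bytes

plain = G711_PAYLOAD + RTP + UDP + IPV4

# SRTP: payload encrypted in place, plus an authentication tag
# (~10 bytes for HMAC-SHA1-80); headers unchanged.
srtp = plain + 10

# IPSec ESP in tunnel mode: outer IP header, ESP header, IV (AES-CBC),
# padding to the 16-byte block, pad-length/next-header trailer, and ICV.
esp_header, iv, trailer, icv = 8, 16, 2, 12
inner = G711_PAYLOAD + RTP + UDP + IPV4  # whole inner packet is encapsulated
pad = (16 - (inner + trailer) % 16) % 16
ipsec = 20 + esp_header + iv + inner + pad + trailer + icv

packets_per_second = 50  # 20 ms packetisation
for name, size in (("plain RTP", plain), ("SRTP", srtp), ("IPSec ESP", ipsec)):
    kbps = size * 8 * packets_per_second / 1000
    print(f"{name}: {size} bytes/packet, {kbps:.1f} kbit/s per direction")
```

Under these assumptions SRTP adds about 10 bytes per packet while ESP tunnel mode adds over 60, which is consistent with the ordering the experiments found.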
A framework for information security governance in SMMEs
- Authors: Coertze, Jacques Jacobus
- Date: 2012
- Subjects: Business -- Data processing -- Security measures , Management information systems -- Security measures , Computer networks -- Security measures
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9810 , http://hdl.handle.net/10948/d1014083
- Description: It has been found that many small, medium and micro-sized enterprises (SMMEs) do not comply with sound information security governance principles, specifically the principles involved in drafting information security policies and monitoring compliance, mainly as a result of restricted resources and expertise. Research suggests that this problem occurs worldwide and that the impact it has on SMMEs is great. The problem is further compounded by the fact that, in our modern-day information technology environment, many larger organisations are providing SMMEs with access to their networks. This results not only in SMMEs being exposed to security risks, but the larger organisations as well. In previous research an information security management framework and toolbox was developed to assist SMMEs in drafting information security policies. Although this research was of some help to SMMEs, further research has shown that an even greater problem exists with the governance of information security as a result of the advancements that have been identified in information security literature. The aim of this dissertation is therefore to establish an information security governance framework that requires minimal effort and little expertise to alleviate governance problems. It is believed that such a framework would be useful for SMMEs and would result in the improved implementation of information security governance.
- Full Text:
- Date Issued: 2012
Towards a user centric model for identity and access management within the online environment
- Authors: Deas, Matthew Burns
- Date: 2008
- Subjects: Computers -- Access control , Computer networks -- Security measures
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9780 , http://hdl.handle.net/10948/775 , Computers -- Access control , Computer networks -- Security measures
- Description: Today, one is expected to remember multiple user names and passwords for the different domains one wants to access on the Internet. Identity management seeks to solve this problem through creating a digital identity that is exchangeable across organisational boundaries. Through the setup of collaboration agreements between multiple domains, users can easily switch across domains without being required to sign in again. However, use of this technology comes with risks of user identity and personal information being compromised. Criminals make use of spoofed websites and social engineering techniques to gain illegal access to user information. Due to this, the need for users to be protected from online threats has increased. Two processes are required to protect the user login information at the time of sign-on. Firstly, the user's information must be protected at the time of sign-on, and secondly, a simple method for the identification of the website is required by the user. This treatise looks at the process for identifying and verifying user information, and how the user can verify the system at sign-in. Three models for identity management are analysed, namely the Microsoft .NET Passport, Liberty Alliance Federated Identity for Single Sign-on and the Mozilla TrustBar for system authentication.
- Full Text:
- Date Issued: 2008
A framework towards effective control in information security governance
- Authors: Viljoen, Melanie
- Date: 2009
- Subjects: Data protection , Computer networks -- Security measures , Electronic data processing departments -- Security measures
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9773 , http://hdl.handle.net/10948/887 , Data protection , Computer networks -- Security measures , Electronic data processing departments -- Security measures
- Description: The importance of information in business today has made the need to properly secure this asset evident. Information security has become a responsibility for all managers of an organization. To better support more efficient management of information security, timely information security management information should be made available to all managers. Smaller organizations face special challenges with regard to information security management and reporting due to limited resources (Ross, 2008). This dissertation discusses a Framework for Information Security Management Information (FISMI) that aims to improve the visibility and contribute to better management of information security throughout an organization by enabling the provision of summarized, comprehensive information security management information to all managers in an affordable manner.
- Full Text:
- Date Issued: 2009