A cyclic approach to business continuity planning
- Authors: Botha, Jacques
- Date: 2002
- Subjects: Data recovery (Computer science) -- Planning , Data protection , Emergency management
- Language: English
- Type: Thesis , Masters , MTech (Information Technology)
- Identifier: vital:10788 , http://hdl.handle.net/10948/81 , Data recovery (Computer science) -- Planning , Data protection , Emergency management
- Description: The Information Technology (IT) industry has grown to become an integral part of the world of business today. The importance of information, and of IT in particular, will in fact only increase with time (von Solms, 1999). For a large group of organizations, computer systems form the basis of their day-to-day functioning (Halliday, Badendorst & von Solms, 1996). These systems evolve at an incredible pace, and this brings about a greater need for securing them, as well as the organizational information they process, transmit and store. This technological evolution brings about new risks for an organization’s systems and information (Halliday et al., 1996). If IT fails, the business could fail as well, creating a need for more rigorous IT management (International Business Machines Corporation, 2000). For this reason, executive management must be made aware of the potential consequences that a disaster could have on the organization (Hawkins, Yen & Chou, 2000). A disaster could be any event that would cause a disruption in the normal day-to-day functioning of an organization. Such an event could range from a natural disaster, like a fire, an earthquake or a flood, to something more trivial, like a virus or system malfunction (Hawkins et al., 2000). During the 1980s a discipline known as Disaster Recovery Planning (DRP) emerged to protect an organization’s data centre, which was central to the organization’s IT-based structure, from the effects of disasters. This solution, however, focused only on the protection of the data centre. During the early 1990s the focus shifted towards distributed computing and client/server technology. Data centre protection and recovery were no longer enough to ensure survival. Organizations needed to ensure the continuation of their mission-critical processes to support their goal of continued operations (IBM Global Services, 1999).
Organizations now had to ensure that their mission-critical functions could continue while the data centre was recovering from a disaster. A different approach was required. It is for this reason that Business Continuity Planning (BCP) was accepted as a formal discipline (IBM Global Services, 1999). To ensure that business continues as usual, an organization must have a plan in place that will help it ensure both the continuation and recovery of critical business processes and the recovery of the data centre, should a disaster strike (Moore, 1995). Wilson (2000) defines a business continuity plan as “a set of procedures developed for the entire enterprise, outlining the actions to be taken by the IT organization, executive staff, and the various business units in order to quickly resume operations in the event of a service interruption or an outage”. With markets as highly competitive as they are, an organization needs a detailed listing of steps to follow to ensure minimal loss due to downtime. This is very important for maintaining its competitive advantage and public stature (Wilson, 2000). The fact that the company’s reputation is at stake requires executive management to take continuity planning very seriously (IBM Global Services, 1999). Ensuring continuity of business processes and recovering the IT services of an organization is not the sole responsibility of the IT department. Therefore, management should be aware that they could be held liable for any consequences resulting from a disaster (Kearvell-White, 1996). Having a business continuity plan in place is important to the entire organization, as everyone, from executive management to the employees, stands to benefit from it (IBM Global Services, 1999). Despite this, numerous organizations do not have a business continuity plan in place. Organizations neglecting to develop a plan put themselves at tremendous risk and stand to lose everything (Kearvell-White, 1996).
- Full Text:
- Date Issued: 2002
An analysis of the use of DNS for malicious payload distribution
- Authors: Dube, Ishmael
- Date: 2019
- Subjects: Internet domain names , Computer networks -- Security measures , Computer security , Computer network protocols , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/97531 , vital:31447
- Description: The Domain Name System (DNS) protocol is a fundamental part of Internet activity that can be abused by cybercriminals to conduct malicious activities. Previous research has shown that cybercriminals use different methods, including the DNS protocol, to distribute malicious content, remain hidden and avoid detection by the various technologies that are put in place to detect anomalies. This allows botnets and certain malware families to establish covert communication channels that can be used to send or receive data and also to distribute malicious payloads using DNS queries and responses. By embedding certain strings in DNS packets, cybercriminals use the DNS to breach highly protected networks, distribute malicious content, and exfiltrate sensitive information without being detected by the security controls put in place. This research undertaking broadens the field and fills an existing research gap by extending the analysis of DNS as a payload distribution channel to the detection of domains that are used to distribute different malicious payloads. The research analysed the use of the DNS in detecting domains and channels that are used for distributing malicious payloads. Passive DNS data, which replicates the DNS queries seen at name servers, was evaluated and analysed in order to detect anomalies indicative of malicious payloads. The research characterises the malicious payload distribution channels by analysing passive DNS traffic and modelling the DNS query and response patterns. It found that it is possible to detect malicious payload distribution channels through the analysis of DNS TXT resource records.
- Full Text:
- Date Issued: 2019
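The abstract above reports that payload distribution channels can be detected by analysing DNS TXT resource records. As a loose illustration of that idea (not the thesis's actual method), a simple heuristic can flag TXT payloads that are unusually long and high in entropy, since encoded binary payloads look statistically different from legitimate SPF/DKIM-style records. The length and entropy thresholds below are invented for the sketch:

```python
import base64
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy of a string, in bits per character."""
    if not s:
        return 0.0
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

def looks_like_payload(txt: str, max_benign_len: int = 128,
                       entropy_threshold: float = 4.5) -> bool:
    """Heuristic: long, high-entropy TXT data is more likely to carry
    an encoded payload than a typical policy record."""
    return len(txt) > max_benign_len and shannon_entropy(txt) > entropy_threshold

# A typical SPF record versus a base64-encoded binary blob.
spf = "v=spf1 include:_spf.example.com ~all"
blob = base64.b64encode(bytes(range(256)) * 2).decode()
```

Here `looks_like_payload(spf)` is false while `looks_like_payload(blob)` is true; a real detector would of course model query/response patterns over passive DNS traffic rather than score single records.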
Evaluating the cyber security skills gap relating to penetration testing
- Authors: Beukes, Dirk Johannes
- Date: 2021
- Subjects: Computer networks -- Security measures , Computer networks -- Monitoring , Computer networks -- Management , Data protection , Information technology -- Security measures , Professionals -- Supply and demand , Electronic data personnel -- Supply and demand
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/171120 , vital:42021
- Description: Information Technology (IT) is growing rapidly and has become an integral part of daily life. It provides a boundless list of services and opportunities, generating boundless sources of information, which could be abused or exploited. Due to this growth, thousands of new users are added to the grid, using computer systems in both static and mobile environments; this fact alone creates endless volumes of data to be exploited and hardware devices to be abused by the wrong people. The growth in the IT environment adds challenges that may affect users in their personal, professional, and business lives. There are constant threats to corporate and private computer networks and computer systems. In the corporate environment, companies try to eliminate the threat by testing networks with penetration tests and by implementing cyber awareness programs to make employees more aware of cyber threats. Penetration tests and vulnerability assessments are undervalued: they are seen as a formality and are not used to increase system security. If used regularly, computer systems would be more secure and attacks minimized. With the growth in technology, industries all over the globe have become fully dependent on information systems in doing their day-to-day business. As technology evolves and new technology becomes available, the risk posed by the dangers that come with it grows. For industry to protect itself against this growth in technology, personnel with a certain skill set are needed. This is where cyber security plays a very important role in the protection of information systems, ensuring the confidentiality, integrity and availability of the information system itself and the data on the system. Due to this drive to secure information systems, the need for cyber security professionals is on the rise as well. It is estimated that there is a shortage of one million cyber security professionals globally.
What is the reason for this skills shortage? Will it be possible to close this gap? This study is about identifying the skills gap and identifying possible ways to close it. Research was conducted on international cyber security standards, cyber security training at universities and international certification, focusing specifically on penetration testing, and on the needs of industry when recruiting new penetration testers, concluding with suggestions on how to fill possible gaps in the skills market.
- Full Text:
- Date Issued: 2021
SIDVI: a model for secure distributed data integration
- Authors: Routly, Wayne A
- Date: 2004
- Subjects: Data protection
- Language: English
- Type: Thesis , Masters , MTech (Information Technology)
- Identifier: vital:10805 , http://hdl.handle.net/10948/261 , Data protection
- Description: The new millennium has brought about an increase in the use of business intelligence and knowledge management systems. The very foundations of these systems are the multitude of source databases that store the data. The ability to derive information from these databases is brought about by means of data integration. With the current emphasis on security in all walks of information and communication technology, renewed interest must be placed in the systems that provide us with information: data integration systems. This dissertation investigates security issues at specific stages in the data integration cycle, with special reference to problems when performing data integration in a peer-to-peer environment, as in distributed data integration. In the database environment we are concerned with the database itself and the media used to connect to and from the database. In distributed data integration, the concept of the database is redefined into the source database, from which we extract data, and the storage database, in which the integrated data is stored. This postulates three distinct areas in which to apply security: the data source, the network medium and the data store. All of these areas encompass data integration and must be considered holistically when implementing security. Data integration is never only one server or one database; it is various geographically dispersed components working together towards a common goal. It is important, then, that we consider all aspects involved when attempting to provide security for data integration. This dissertation focuses on the areas of security threats and investigates a model to ensure the integrity and security of data during the entire integration process. To be effective in a data integration environment, security should be present at all stages and should provide end-to-end protection.
- Full Text:
- Date Issued: 2004
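The abstract above argues for end-to-end protection across the data source, the network medium and the data store. One minimal way to picture that requirement (a sketch only, not the SIDVI model itself; the key, record format and function names are invented for illustration) is an integrity tag computed at the source and verified at the store, so that tampering anywhere in between is detectable:

```python
import hashlib
import hmac

# Assumption: source and store share a per-deployment secret key.
SHARED_KEY = b"example-integration-key"

def tag_record(record: bytes, key: bytes = SHARED_KEY) -> str:
    """Source side: compute an HMAC over the record before it leaves
    the source database."""
    return hmac.new(key, record, hashlib.sha256).hexdigest()

def verify_record(record: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Store side: recompute the HMAC and compare in constant time,
    rejecting any record modified in transit."""
    return hmac.compare_digest(tag_record(record, key), tag)

row = b"customer_id=42,balance=100.00"
tag = tag_record(row)
```

An unmodified `row` verifies against its `tag`; a record altered on the network medium does not. Confidentiality would additionally require encrypting the channel, which this fragment does not show.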
Information security awareness: generic content, tools and techniques
- Authors: Mauwa, Hope
- Date: 2007
- Subjects: Computer security , Data protection , Computers -- Safety measures , Information technology -- Security measures
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9733 , http://hdl.handle.net/10948/560 , Computer security , Data protection , Computers -- Safety measures , Information technology -- Security measures
- Description: In today’s computing environment, awareness programmes play a much more important role in organizations’ complete information security programmes. Information security awareness programmes exist to change behaviour or reinforce good security practices, and to provide a baseline of security knowledge for all information users. Security awareness is a learning process, which changes individual and organizational attitudes and perceptions so that the importance of security and the adverse consequences of its failure are realized. With proper awareness, therefore, employees become the most effective layer in an organization’s security defence. Given the important role that these awareness programmes play in organizations’ complete information security programmes, it is imperative that all organizations serious about information security implement one. But though awareness programmes have become increasingly important, the level of awareness in most organizations is still low. It seems that the current approach to developing these programmes does not satisfy the needs of most organizations. Therefore, another approach, which tries to meet the needs of most organizations, is proposed in this project as part of the solution to raising the level of awareness programmes in organizations.
- Full Text:
- Date Issued: 2007
Managing an information security policy architecture : a technical documentation perspective
- Authors: Maninjwa, Prosecutor Mvikeli
- Date: 2012
- Subjects: Computer security -- Management , Computer architecture , Data protection
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9825 , http://hdl.handle.net/10948/d1020757
- Description: Information and the related assets form critical business assets for most organizations. Organizations depend on their information assets to survive and to remain competitive. However, an organization’s information assets face a number of internal and external threats, aimed at compromising the confidentiality, integrity and/or availability (CIA) of information assets. These threats can be of a physical, technical, or operational nature. For an organization to successfully conduct its business operations, information assets should always be protected from these threats. The process of protecting information and its related assets, ensuring the CIA thereof, is referred to as information security. To be effective, information security should be viewed as critical to the overall success of the organization, and therefore be included as one of the organization’s Corporate Governance sub-functions, referred to as Information Security Governance. Information Security Governance is the strategic system for directing and controlling the organization’s information security initiatives. Directing is the process whereby management issues directives, giving a strategic direction for information security within an organization. Controlling is the process of ensuring that management directives are being adhered to within an organization. To be effective, Information Security Governance directing and controlling depend on the organization’s Information Security Policy Architecture. An Information Security Policy Architecture is a hierarchical representation of the various information security policies and related documentation that an organization uses. When directing, management directives should be issued in the form of an Information Security Policy Architecture, and controlling should ensure adherence to the Information Security Policy Architecture.
However, this study noted that in both literature and organizational practices, Information Security Policy Architectures are not comprehensively addressed and adequately managed. Therefore, this study argues towards a more comprehensive Information Security Policy Architecture, and the proper management thereof.
- Full Text:
- Date Issued: 2012
Targeted attack detection by means of free and open source solutions
- Authors: Bernardo, Louis F
- Date: 2019
- Subjects: Computer networks -- Security measures , Information technology -- Security measures , Computer security -- Management , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92269 , vital:30703
- Description: Compliance requirements are part of everyday business requirements for various areas, such as retail and medical services. As part of compliance it may be required to have infrastructure in place to monitor activities in the environment, to ensure that the relevant data and environment are sufficiently protected. At the core of such monitoring solutions one would find some type of data repository, or database, to store and ultimately correlate the captured events. Such solutions are commonly called Security Information and Event Management, or SIEM for short. Larger companies have been known to use commercial solutions such as IBM's QRadar, LogRhythm, or Splunk. However, these come at significant cost and aren't suitable for smaller businesses with limited budgets. These solutions require manual configuration of event correlation for the detection of activities that place the environment in danger. This usually requires vendor implementation assistance, which also comes at a cost. Alternatively, there are open source solutions that provide the required functionality. This research will demonstrate building an open source solution, with minimal to no cost for hardware or software, while still maintaining the capability of detecting targeted attacks. The solution presented in this research includes Wazuh, which is a combination of OSSEC and the ELK stack, integrated with a Network Intrusion Detection System (NIDS). The success of the integration is determined by measuring positive attack detection for each of the different configuration options. To perform the testing, a deliberately vulnerable platform named Metasploitable will be used as a victim host. The victim host's vulnerabilities were created specifically to serve as targets for Metasploit. The attacks were generated by utilising the Metasploit Framework on a prebuilt Kali Linux host.
- Full Text:
- Date Issued: 2019
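The event-correlation idea at the heart of a SIEM, as described in the abstract above, can be sketched in a few lines: count related events per source within a sliding time window and alert when a threshold is crossed. This is an illustration of the concept only; real tools such as Wazuh express such rules in their own rule language, not in code like this.

```python
# Minimal sketch of SIEM-style event correlation: flag source addresses
# with repeated login failures inside a sliding time window. Thresholds,
# event names and addresses are assumptions made for illustration.

from collections import defaultdict

def correlate(events, threshold=3, window=60):
    """Return source IPs with >= threshold failed logins within `window` seconds."""
    failures = defaultdict(list)
    alerts = set()
    for ts, src, kind in sorted(events):
        if kind != "login_failure":
            continue
        times = failures[src]
        times.append(ts)
        # Drop timestamps that have fallen out of the sliding window.
        while times and ts - times[0] > window:
            times.pop(0)
        if len(times) >= threshold:
            alerts.add(src)
    return alerts

events = [
    (0, "10.0.0.5", "login_failure"),
    (10, "10.0.0.5", "login_failure"),
    (20, "10.0.0.5", "login_failure"),
    (30, "10.0.0.9", "login_failure"),
]
print(correlate(events))  # {'10.0.0.5'}
```

Commercial SIEMs apply the same sliding-window principle, but with vendor-specific rule syntax and far richer event taxonomies.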
A model for cultivating resistance to social engineering attacks
- Authors: Jansson, Kenny
- Date: 2011
- Subjects: Computer security , Data protection , Human-computer interaction
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9744 , http://hdl.handle.net/10948/1588 , Computer security , Data protection , Human-computer interaction
- Description: The human being is commonly considered to be the weakest link in information security. Moreover, as information is one of the most critical assets in an organization today, it is essential that the human element be considered in deployments of information security countermeasures. However, the human element is often neglected in this regard. Consequently, many criminals now target the user directly to obtain sensitive information, instead of spending days or even months trying to hack through systems. Some criminals target users by utilizing various social engineering techniques to deceive the user into disclosing information. For this reason, users of the Internet and ICT-related technologies are nowadays very vulnerable to various social engineering attacks. As a contribution to increasing users’ social engineering awareness, a model – called SERUM – was devised. SERUM aims to cultivate social engineering resistance within a community by exposing the users of the community to ‘fake’ social engineering attacks. Users who react incorrectly to these attacks are instantly notified and requested to participate in an online social engineering awareness program. Thus, users are educated on demand. The model was implemented as a software system and was utilized to conduct a phishing exercise on all the students of the Nelson Mandela Metropolitan University. The aim of the phishing exercise was to determine whether SERUM is effective in cultivating social engineering resistant behaviour within a community. The phishing exercise proved successful and yielded positive results, indicating that a model like SERUM can indeed be used to educate users regarding phishing attacks.
- Full Text:
- Date Issued: 2011
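The feedback loop described in the SERUM abstract above — simulate an attack, flag the users who react incorrectly, and enrol them in an awareness programme — can be sketched as follows. The data model here is an assumption made purely for illustration; it is not the actual SERUM implementation.

```python
# Sketch of a SERUM-style loop: users who fall for a simulated phishing
# message are flagged for the on-demand awareness programme. User names
# and the reaction format are hypothetical.

def run_phishing_exercise(reactions):
    """reactions maps user -> True if the user fell for the fake attack."""
    to_educate = []
    for user, fell_for_it in reactions.items():
        if fell_for_it:
            # In SERUM the user is instantly notified and requested to
            # complete an online awareness programme.
            to_educate.append(user)
    return sorted(to_educate)

reactions = {"alice": False, "bob": True, "carol": True}
print(run_phishing_exercise(reactions))  # ['bob', 'carol']
```

The point of the on-demand design is that education is triggered by the user's own mistake, rather than delivered as generic periodic training.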
A framework for information security management in local government
- Authors: De Lange, Joshua
- Date: 2017
- Subjects: Computer security -- Management , Data protection
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: http://hdl.handle.net/10948/7588 , vital:21932
- Description: Information has become so pervasive within enterprises and everyday life that it is almost indispensable. This is clear as information has become core to the business operations of any enterprise. Information and communication technology (ICT) systems are heavily relied upon to store, process and transmit this valuable commodity. Due to its immense value, information and related ICT resources have to be adequately protected. This protection of information is commonly referred to as information security.
- Full Text:
- Date Issued: 2017
An investigation of the security of passwords derived from African languages
- Authors: Sishi, Sibusiso Teboho
- Date: 2019
- Subjects: Computers -- Access control -- Passwords , Computer users -- Attitudes , Internet -- Access control , Internet -- Security measures , Internet -- Management , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/163273 , vital:41024
- Description: Password authentication has become ubiquitous in the cyber age. To date, there have been several studies on country-based passwords by authors who studied, amongst others, English, Finnish, Italian and Chinese based passwords. However, there has been a lack of focused study on the type of passwords that are being created in Africa, and whether there are benefits in creating passwords in an African language. For this research, password databases containing LAN Manager (LM) and NT LAN Manager (NTLM) hashes, extracted from South African organisations in a variety of sectors of the economy, were obtained to gain an understanding of user behaviour in creating passwords. Analysis of the passwords obtained from these hashes (using several cracking methods) showed that many organisational passwords are based on the English language. This is understandable considering that the business language in South Africa is English, even though South Africa has 11 official languages. African language based passwords were derived from known weak English passwords, and some of the passwords were appended with numbers and special characters. The African based passwords, created using eight Southern African languages, were then uploaded to the Internet to test the security of passwords based on African languages. Since most of the passwords could be cracked by third-party researchers, we conclude that a password derived from known weak English words shows no improvement in security when written in an African language, especially in the more widely spoken languages, namely isiZulu, isiXhosa and Setswana.
- Full Text:
- Date Issued: 2019
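The wordlist-mangling technique implied by the abstract above — base words with digits and special characters appended, then tested against hash databases — can be sketched briefly. The base words and suffixes here are hypothetical stand-ins for the study's lists, and SHA-256 is used purely for illustration; the study targeted LM/NTLM hashes, which use different algorithms.

```python
# Sketch of a dictionary attack with suffix mangling, as used in
# password audits. Words, suffixes and target hashes are hypothetical;
# SHA-256 stands in for the LM/NTLM hashing the study actually analysed.

import hashlib

def mangle(words, suffixes=("", "1", "123", "!", "@2019")):
    """Yield each base word with each common suffix appended."""
    for word in words:
        for suffix in suffixes:
            yield word + suffix

def crack(target_hashes, words):
    """Return {digest: candidate} for candidates whose digest is a target."""
    found = {}
    for candidate in mangle(words):
        digest = hashlib.sha256(candidate.encode()).hexdigest()
        if digest in target_hashes:
            found[digest] = candidate
    return found

# Hypothetical base words; "amandla1" plays the role of a weak password.
words = ["amandla", "themba"]
targets = {hashlib.sha256(b"amandla1").hexdigest()}
print(crack(targets, words))
```

This illustrates the study's conclusion: if the base word is drawn from a known weak wordlist, translating it into another language and bolting on a digit does not move it out of reach of this kind of attack.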
A study of South African computer users' password usage habits and attitude towards password security
- Authors: Friendman, Brandon
- Date: 2014
- Subjects: Computers -- Access control -- Passwords , Computer users -- Attitudes , Internet -- Access control , Internet -- Security measures , Internet -- Management , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: vital:4700
- Description: The challenge of having to create and remember a secure password for each user account has become a problem for many computer users and can lead to bad password management practices. Simpler and less secure passwords are often selected and are regularly reused across multiple user accounts. Computer users within corporations and institutions are subject to password policies: policies which require users to create passwords of a specified length and composition and to change passwords regularly. These policies often prevent users from reusing previously selected passwords. Security vendors and professionals have sought to improve or even replace password authentication. Technologies such as multi-factor authentication and single sign-on have been developed to complement or even replace password authentication. The objective of the study was to investigate the password habits of South African computer and internet users. The aim was to assess their attitudes toward password security, to determine whether password policies affect the manner in which they manage their passwords, and to investigate their exposure to alternate authentication technologies. The results from the online survey demonstrated that the password practices of the participants, across their professional and personal contexts, were generally insecure. Participants often used shorter, simpler and ultimately less secure passwords. Participants would try to memorise all of their passwords or reuse the same password on most of their accounts. Many participants had not received any security awareness training, and additional security technologies (such as multi-factor authentication or password managers) were seldom used or provided to them. The password policies encountered by the participants in their organisations did little to encourage users to apply more secure password practices. 
Users lacked knowledge and understanding of password security, as they had received little or no training pertaining to it.
- Full Text:
- Date Issued: 2014
WSP3: a web service model for personal privacy protection
- Authors: Ophoff, Jacobus Albertus
- Date: 2003
- Subjects: Data protection , Computer security , Privacy, Right of
- Language: English
- Type: Thesis , Masters , MTech (Information Technology)
- Identifier: vital:10798 , http://hdl.handle.net/10948/272 , Data protection , Computer security , Privacy, Right of
- Description: The prevalent use of the Internet not only brings with it numerous advantages, but also some drawbacks. The biggest of these problems is the threat to the individual’s personal privacy. This privacy issue is playing a growing role with respect to technological advancements. While new service-based technologies are considerably increasing the scope of information flow, the cost is a loss of control over personal information and therefore privacy. Existing privacy protection measures might fail to provide effective privacy protection in these new environments. This dissertation focuses on the use of new technologies to improve the levels of personal privacy. In this regard the WSP3 (Web Service Model for Personal Privacy Protection) model is formulated. This model proposes a privacy protection scheme using Web Services. Having received tremendous industry backing, Web Services is a very topical technology, promising much in the evolution of the Internet. In our society privacy is highly valued and a very important issue. Protecting personal privacy in environments using new technologies is crucial for their future success. These facts, combined with the fact that the WSP3 model focuses on Web Service environments, lead to the following realizations for the model: The WSP3 model provides users with control over their personal information and allows them to express their desired level of privacy. Parties requiring access to a user’s information are explicitly defined by the user, as well as the information available to them. The WSP3 model utilizes a Web Services architecture to provide privacy protection. In addition, it integrates security techniques, such as cryptography, into the architecture as required. The WSP3 model integrates with current standards to maintain their benefits. This allows the implementation of the model in any environment supporting these base technologies. 
In addition, the research involves the development of a prototype according to the model. This prototype serves to present a proof-of-concept by illustrating the WSP3 model and all the technologies involved. The WSP3 model gives users control over their privacy and allows everyone to decide their own level of protection. By incorporating Web Services, the model also shows how new technologies can be used to offer solutions to existing problem areas.
- Full Text:
- Date Issued: 2003
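The core WSP3 idea described above — the user explicitly defines which parties may access which parts of their personal information — can be sketched as a simple per-party release filter. The field names, party identifiers and policy structure are illustrative assumptions; the actual model operates over Web Service messages, not in-memory dictionaries.

```python
# Sketch of user-controlled information release in the spirit of WSP3:
# only fields the user has explicitly granted to a party are released.
# Profile fields, party names and policy layout are hypothetical.

profile = {"name": "A. User", "email": "user@example.com", "phone": "555-0100"}

# Per-party access policy, defined by the user.
policy = {
    "shop.example.com": {"name", "email"},
    "bank.example.com": {"name", "email", "phone"},
}

def release(profile, policy, party):
    """Return only the fields the user has granted to `party`."""
    allowed = policy.get(party, set())   # unknown parties get nothing
    return {k: v for k, v in profile.items() if k in allowed}

print(release(profile, policy, "shop.example.com"))
print(release(profile, policy, "unknown.example.com"))  # {}
```

Defaulting unknown parties to an empty grant reflects the model's stance that access must be explicitly defined by the user rather than assumed.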
Implementing the CoSaWoE models in a commercial workflow product
- Authors: Erwee, Carmen
- Date: 2005
- Subjects: Computers -- Access control , Workflow , Computer security , Data protection
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9732 , http://hdl.handle.net/10948/169 , Computers -- Access control , Workflow , Computer security , Data protection
- Description: Workflow systems have gained popularity not only as a research topic, but also as a key component of Enterprise Resource Planning packages and e-business. Comprehensive workflow products that automate intra- as well as inter-organizational information flow are now available for commercial use. Standardization efforts have centered mostly around the interoperability of these systems; however, a standard access control model has yet to be adopted. The research community has developed several models for access control to be included as part of workflow functionality. Commercial systems, however, are still implementing access control functionality in a proprietary manner. This dissertation investigates whether a comprehensive model for gaining context-sensitive access control, namely CoSAWoE, can be purposefully implemented in a commercial workflow product. Using methods such as an exploratory prototype, various aspects of the model were implemented to gain an understanding of the difficulties developers face when attempting to map the model to existing proprietary software. Oracle Workflow was chosen as an example of a commercial workflow product. An investigation of the features of this product, together with the prototype, revealed the ability to effect access control in a similar manner to the model: by specifying access control constraints during administration and design, and then enforcing those constraints dynamically during run-time. However, only certain components within these two aspects of the model directly affected the commercial workflow product. It was argued that the first two requirements of context-sensitive access control, order of events and strict least privilege, addressed by the object design, role engineering and session control components of the model, can be simulated if such capabilities are not pertinently available as part of the product. As such, guidelines were provided for how this can be achieved in Oracle Workflow. 
However, most of the implementation effort focussed on the last requirement of context-sensitive access control, namely separation of duties. The CoSAWoE model proposes SoD administration steps that include expressing various business rules through a set of conflicting entities which are maintained outside the scope of the workflow system. This component was implemented easily enough through tables created in a relational database. Evaluating these conflicts during run-time to control worklist generation proved more difficult. First, a thorough understanding of the way in which workflow history is maintained was necessary. A re-usable function was developed to prune user lists according to user involvement in previous tasks in the workflow and the conflicts specified for those users and tasks. However, due to the lack of a central access control service, this re-usable function must be included in the appropriate places in the workflow process model. Furthermore, the dissertation utilized a practical example to develop a prototype. This prototype served a dual purpose: firstly, to aid the author's understanding of the features and principles involved, and secondly, to illustrate and explore the implementation of the model as described in the previous paragraphs. In conclusion, the dissertation summarized the CoSAWoE model's components which were found to be product agnostic, directly or indirectly implementable, or not implemented in the chosen workflow product. The lessons learnt and issues surrounding the implementation effort were also discussed, before further research in terms of XML documents as data containers for the workflow process was suggested.
- Full Text:
- Date Issued: 2005
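The worklist-pruning function the abstract describes — removing candidate users whose prior task involvement conflicts with the task at hand — can be sketched as follows. The task names, user names and the in-memory conflict table are assumptions for illustration; the dissertation stored conflicting entities in relational database tables outside the workflow engine.

```python
# Sketch of a separation-of-duties check: prune a candidate user list
# for a task using workflow history and a table of conflicting task
# pairs. Task and user names are hypothetical.

# Pairs of tasks that may not be performed by the same user.
conflicts = {("raise_invoice", "approve_invoice")}

# Workflow history for this process instance: (task, user) pairs done so far.
history = [("raise_invoice", "carmen")]

def prune(candidates, task, history, conflicts):
    """Remove users whose previously performed tasks conflict with `task`."""
    blocked = set()
    for done_task, user in history:
        if (done_task, task) in conflicts or (task, done_task) in conflicts:
            blocked.add(user)
    return [u for u in candidates if u not in blocked]

print(prune(["carmen", "louis"], "approve_invoice", history, conflicts))
# ['louis'] -- the user who raised the invoice may not approve it
```

As the abstract notes, without a central access control service a check like this must be invoked at each relevant point in the workflow process model rather than enforced once, globally.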
The cost of free instant messaging: an attack modelling perspective
- Authors: Du Preez, Riekert
- Date: 2006
- Subjects: Computer security , Instant messaging , Data protection
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9797 , http://hdl.handle.net/10948/499 , http://hdl.handle.net/10948/d1011921 , Computer security , Instant messaging , Data protection
- Description: Instant Messaging (IM) has grown tremendously over the last few years. Even though IM was originally developed as a social chat system, it has found a place in many companies, where it is being used as an essential business tool. However, many businesses rely on free IM and have not implemented a secure corporate IM solution. Most free IM clients were never intended for use in the workplace and, therefore, lack strong security features and administrative control. Consequently, free IM clients can provide attackers with an entry point for malicious code in an organization’s network that can ultimately lead to a company’s information assets being compromised. Therefore, even though free IM allows for better collaboration in the workplace, it comes at a cost, as the title of this dissertation suggests. This dissertation sets out to answer the question of how free IM can facilitate an attack on a company’s information assets. To answer the research question, the dissertation defines an IM attack model that models the ways in which an information system can be attacked when free IM is used within an organization. The IM attack model was created by categorising IM threats using the STRIDE threat classification scheme. The attacks that realize the categorised threats were then modelled using attack trees as the chosen attack modelling tool. Attack trees were chosen because of their ability to model the sequence of attacker actions during an attack. The author defined an enhanced graphical notation that was adopted for the attack trees used to create the IM attack model. The enhanced attack tree notation extends traditional attack trees to allow nodes in the trees to be of different classes and, therefore, allows attack trees to convey more information. During the process of defining the IM attack model, a number of experiments were conducted where IM vulnerabilities were exploited. 
Thereafter, a case study was constructed to document a simulated attack on an information system that involves the exploitation of IM vulnerabilities. The case study demonstrates how an attacker’s attack path relates to the IM attack model in a practical scenario. The IM attack model provides insight into how IM can facilitate an attack on a company’s information assets. The creation of the attack model for free IM led to several realizations. The IM attack model revealed that even though the use of free IM clients may seem harmless, such IM clients can facilitate an attack on a company’s information assets. Furthermore, certain IM vulnerabilities may not pose a great risk by themselves, but when combined with the exploitation of other vulnerabilities, a much greater threat can be realized. These realizations hold true to what French playwright Jean Anouilh once said: “What you get free costs too much”.
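An attack tree of the general kind described above can be sketched with AND/OR node classes, evaluating whether a set of attacker capabilities realises the root threat. The node labels and structure below are invented examples; the dissertation's enhanced notation distinguishes more node classes than this minimal sketch shows.

```python
# Illustrative AND/OR attack tree, loosely in the spirit of attack
# modelling for IM; labels are hypothetical, not from the dissertation.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    kind: str = "OR"                 # "OR": any child suffices; "AND": all required
    children: list = field(default_factory=list)

def achievable(node, capabilities):
    """Can an attacker with the given leaf capabilities realise this node?"""
    if not node.children:            # leaf: directly a capability
        return node.label in capabilities
    results = [achievable(c, capabilities) for c in node.children]
    return all(results) if node.kind == "AND" else any(results)

compromise = Node("compromise information assets", "OR", [
    Node("deliver malware via IM", "AND", [
        Node("send file transfer"),
        Node("user accepts file"),
    ]),
    Node("hijack IM session"),
])
```

The sequencing property the abstract attributes to attack trees shows up here: an AND node only succeeds once every contributing child action is available to the attacker.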
- Full Text:
- Date Issued: 2006
Distributed authentication for resource control
- Authors: Burdis, Keith Robert
- Date: 2000
- Subjects: Computers -- Access control , Data protection , Computer networks -- Security measures , Electronic data processing departments -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4630 , http://hdl.handle.net/10962/d1006512 , Computers -- Access control , Data protection , Computer networks -- Security measures , Electronic data processing departments -- Security measures
- Description: This thesis examines distributed authentication in the process of controlling computing resources. We investigate user sign-on and two of the main authentication technologies that can be used to control a resource through authentication and providing additional security services. The problems with the existing sign-on scenario are that users have too much credential information to manage and are prompted for this information too often. Single Sign-On (SSO) is a viable solution to this problem if physical procedures are introduced to minimise the risks associated with its use. The Generic Security Services API (GSS-API) provides security services in a manner independent of the environment in which these security services are used, encapsulating security functionality and insulating users from changes in security technology. The underlying security functionality is provided by GSS-API mechanisms. We developed the Secure Remote Password GSS-API Mechanism (SRPGM) to provide a mechanism that has low infrastructure requirements, is password-based and does not require the use of long-term asymmetric keys. We provide implementations of the Java GSS-API bindings and the LIPKEY and SRPGM GSS-API mechanisms. The Simple Authentication and Security Layer (SASL) provides security to connection-based Internet protocols. After finding deficiencies in existing SASL mechanisms we developed the Secure Remote Password SASL mechanism (SRP-SASL) that provides strong password-based authentication and countermeasures against known attacks, while still being simple and easy to implement. We provide implementations of the Java SASL binding and several SASL mechanisms, including SRP-SASL.
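As background to the SRP-based mechanisms mentioned above, the toy sketch below shows the verifier computation at the heart of the Secure Remote Password protocol: the server stores only a salted verifier, never the password, and no long-term asymmetric keys are needed. The hash layout and group parameters are simplified illustrations (real deployments use standardised large groups, e.g. those in RFC 5054); this is not code from the thesis.

```python
# Toy SRP-style verifier computation; parameters are for illustration only.
import hashlib

N = 2**127 - 1   # illustrative prime modulus; far too small for real use
g = 3            # illustrative generator

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return int.from_bytes(h.digest(), "big")

def make_verifier(username, password, salt):
    # x = H(salt | H(username ":" password)); verifier v = g^x mod N.
    inner = H(f"{username}:{password}".encode())
    x = H(salt, inner.to_bytes(32, "big"))
    return pow(g, x, N)

# The server stores (salt, v); the password itself never leaves the client.
v = make_verifier("alice", "correct horse", b"\x01\x02")
```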
- Full Text:
- Date Issued: 2000
An investigation of issues of privacy, anonymity and multi-factor authentication in an open environment
- Authors: Miles, Shaun Graeme
- Date: 2012-06-20
- Subjects: Electronic data processing departments -- Security measures , Electronic data processing departments , Privacy, Right of , Computer security , Data protection , Computers -- Access control
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4656 , http://hdl.handle.net/10962/d1006653 , Electronic data processing departments -- Security measures , Electronic data processing departments , Privacy, Right of , Computer security , Data protection , Computers -- Access control
- Description: This thesis performs an investigation into issues concerning the broad area of Identity and Access Management, with a focus on open environments. Through literature research the issues of privacy, anonymity and access control are identified. The issue of privacy is an inherent problem due to the nature of the digital network environment. Information can be duplicated and modified regardless of the wishes and intentions of the owner of that information unless proper measures are taken to secure the environment. Once information is published or divulged on the network, there is very little way of controlling the subsequent usage of that information. To address this issue a model for privacy is presented that follows the user-centric paradigm of meta-identity. The lack of anonymity, where security measures can be thwarted through the observation of the environment, is a concern for users and systems. By observing the communication channel and monitoring the interactions between users and systems over a long enough period of time, an attacker can infer knowledge about the users and systems. This knowledge is used to build an identity profile of potential victims to be used in subsequent attacks. In terms of access control, the inherent weakness of single-factor authentication mechanisms is discussed. The typical mechanism is the username and password pair, which provides a single point of failure. By increasing the factors used in authentication, the amount of work required to compromise the system increases non-linearly. Within an open network, several aspects hinder wide-scale adoption and use of multi-factor authentication schemes, such as token management and the impact on usability. To address the problem, mechanisms for providing an acceptable level of anonymity while maintaining adequate accountability (from a legal standpoint) are explored. 
The framework is developed from a Utopian point of view, with the aim of being applicable to many situations as opposed to a single specific domain. The framework incorporates multi-factor authentication over multiple paths using mobile phones and GSM networks, and explores the usefulness of such an approach. The models are in turn analysed, providing a discussion of the assumptions made and the problems faced by each model.
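As one concrete illustration of adding a factor beyond the username/password pair discussed above, the sketch below implements the standard time-based one-time password construction (HOTP/TOTP, per RFCs 4226/6238). This is general background on multi-factor authentication, not the GSM-based multi-path scheme developed in the thesis, and the shared key shown is the published RFC test key.

```python
# Minimal HOTP/TOTP sketch: both parties derive a short-lived code from a
# shared secret, so a stolen password alone no longer suffices.
import hmac, hashlib, struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, unix_time: int, step: int = 30) -> str:
    return hotp(key, unix_time // step)

# RFC 6238 test key; client and server compute the same code per time step.
code = totp(b"12345678901234567890", 59)
```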
- Full Text:
A comparative study of CERBER, MAKTUB and LOCKY Ransomware using a Hybridised-Malware analysis
- Authors: Schmitt, Veronica
- Date: 2019
- Subjects: Microsoft Windows (Computer file) , Data protection , Computer crimes -- Prevention , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92313 , vital:30702
- Description: There has been a significant increase in the prevalence of Ransomware attacks in the preceding four years to date. This indicates that the battle has not yet been won defending against this class of malware. This research proposes that by identifying the similarities within the operational framework of Ransomware strains, a better overall understanding of their operation and function can be achieved. This, in turn, will aid in a quicker response to future attacks. With the average Ransomware attack taking two hours to be identified, it shows that there is not yet a clear understanding as to why these attacks are so successful. Research into Ransomware is limited by what is currently known on the topic. Due to the limitations of the research, the decision was taken to examine only three samples of Ransomware from different families. This was decided due to the complexities and comprehensive nature of the research. The in-depth nature of the research and the time constraints associated with it did not allow the proof of concept of this framework to be tested on more than three families, but the exploratory work was promising and should be further explored in future research. The aim of the research is to follow the Hybrid-Malware analysis framework, which consists of both static and dynamic analysis phases, in addition to the digital forensic examination of the infected system. This allows for signature-based findings, along with behavioural and forensic findings, all in one. This information allows for a better understanding of how this malware is designed and how it infects and remains persistent on a system. The operating system which has been chosen is the Microsoft Windows 7 operating system, which is still utilised by a significant proportion of Windows users, especially in the corporate environment. 
The experiment process was designed to enable the researcher to collect information regarding the Ransomware and every aspect of its behaviour and communication on a target system. The results can be compared across the three strains to identify the commonalities. The initial hypothesis was that Ransomware variants are all much like an instant cake mix: each consists of specific building blocks which remain the same, with the flavouring being the unique feature.
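The cross-strain comparison can be pictured as intersecting each strain's observed behaviours to surface the shared "building blocks" and each strain's "flavouring". The feature names below are invented placeholders for illustration, not findings from the analysis.

```python
# Hypothetical behaviour sets per strain; intersecting them yields the
# common building blocks, set difference yields each strain's uniqueness.
features = {
    "CERBER": {"persistence_run_key", "aes_file_encryption", "c2_beacon", "ransom_note"},
    "MAKTUB": {"persistence_run_key", "aes_file_encryption", "ransom_note", "offline_crypto"},
    "LOCKY":  {"persistence_run_key", "aes_file_encryption", "c2_beacon", "ransom_note"},
}

common = set.intersection(*features.values())            # shared blocks
unique = {name: feats - common for name, feats in features.items()}
```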
- Full Text:
- Date Issued: 2019
A model for information security control audit for small to mid-sized organisations
- Authors: Deysel, Natasha
- Date: 2009
- Subjects: Data protection , Computer networks -- Information technology
- Language: English
- Type: Thesis , Masters , MA
- Identifier: vital:9760 , http://hdl.handle.net/10948/940 , Data protection , Computer networks -- Information technology
- Description: Organisations are increasingly dependent on their information. Compromise of this information, in terms of loss, inaccuracy or competitors gaining unauthorised access, could have devastating consequences for the organisation. Therefore, information security governance has become a major concern for all organisations, large and small. Information security governance is based on a set of policies and internal controls by which organisations direct and manage their information security. An effective information security governance programme should be based on a recognised framework, such as the Control Objectives for Information and related Technology (COBIT). COBIT focuses on what control objectives must be achieved in order to effectively manage the information technology environment. It has become very clear that if a company is serious about information security governance, it needs to apply the part of the COBIT framework that deals with information security. The problem in some medium-sized organisations is that they do not realise the importance of information security governance and are either unaware of the risks or choose to ignore them, as they do not have the expertise or resources available to provide them with assurance that they have the right information security controls in place to protect their organisation against threats.
- Full Text:
- Date Issued: 2009
Enabling e-learning 2.0 in information security education: a semantic web approach
- Authors: Goss, Ryan Gavin
- Date: 2009
- Subjects: Data protection , Computers -- Access control , Electronic data processing -- Security measures , Electronic data processing departments -- Security measures
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9771 , http://hdl.handle.net/10948/909 , Data protection , Computers -- Access control , Electronic data processing -- Security measures , Electronic data processing departments -- Security measures
- Description: The motivation for this study argued that current information security education systems are inadequate for educating all users of computer systems worldwide in acting securely during their operations with information systems. There is, therefore, a pervasive need for information security knowledge in all aspects of modern life. E-Learning 2.0 could possibly contribute to solving this problem; however, little or no knowledge currently exists regarding the suitability and practicality of using such systems to impart information security knowledge to learners.
- Full Text:
- Date Issued: 2009
Applying a framework for IT governance in South African higher education institutions
- Authors: Viljoen, Stephen
- Date: 2005
- Subjects: Computer security , Universities and colleges -- Computer networks -- Security measures -- South Africa , Data protection
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9739 , http://hdl.handle.net/10948/416 , Computer security , Universities and colleges -- Computer networks -- Security measures -- South Africa , Data protection
- Description: Background: Higher Education (HE), through HE Institutions, plays a very important role in society. There is thus a need for this sector to be well managed, especially with regards to planning, organising, and controlling. Corporate Governance has received a lot of attention in recent times, especially to engender trust on the part of the stakeholders. There are many similarities, but also significant differences in the governance of HE institutions and public companies. Information Technology (IT) plays an extremely important role in the modern organisation, creating huge opportunities, but also increasing the risk to the organisation. Therefore, effective governance of IT in HE Institutions is of great importance.
- Full Text:
- Date Issued: 2005