A baseline study of potentially malicious activity across five network telescopes
- Authors: Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429714 , vital:72634 , https://ieeexplore.ieee.org/abstract/document/6568378
- Description: This paper explores the Internet Background Radiation (IBR) observed across five distinct network telescopes over a 15 month period. These network telescopes each consist of a /24 netblock and are deployed in IP space administered by TENET, the tertiary education network in South Africa, covering three numerically distant /8 network blocks. The differences and similarities in the observed network traffic are explored. Two anecdotal case studies are presented relating to the MS08-067 and MS12-020 vulnerabilities in the Microsoft Windows platforms. The first of these relates to the Conficker worm outbreak in 2008; traffic targeting 445/tcp remains one of the top constituents of IBR as observed on the telescopes. The case of MS12-020 is of interest, as a long period of scanning activity targeting 3389/tcp, used by the Microsoft RDP service, was observed, with a significant drop in activity following the release of the security advisory and patch. Other areas of interest are highlighted, particularly where correlation in scanning activity was observed across the sensors. The paper concludes with some discussion on the application of network telescopes as part of a cyber-defence solution.
- Full Text:
- Date Issued: 2013
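A minimal sketch of the kind of per-port aggregation used to rank IBR constituents such as 445/tcp and 3389/tcp in the record above, assuming scapy is installed and telescope captures exist as pcap files; the file name is a hypothetical placeholder.

```python
from collections import Counter

from scapy.all import rdpcap, TCP  # assumes scapy is installed


def top_tcp_ports(pcap_path, n=10):
    """Count TCP destination ports observed in a telescope capture."""
    counts = Counter()
    for pkt in rdpcap(pcap_path):
        if TCP in pkt:
            counts[pkt[TCP].dport] += 1
    return counts.most_common(n)


if __name__ == "__main__":
    # "telescope-A.pcap" is a hypothetical capture file, not from the paper
    for port, hits in top_tcp_ports("telescope-A.pcap"):
        print(f"{port}/tcp: {hits} packets")
```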
A high-level architecture for efficient packet trace analysis on gpu co-processors
- Nottingham, Alastair, Irwin, Barry V W
- Authors: Nottingham, Alastair , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429572 , vital:72623 , 10.1109/ISSA.2013.6641052
- Description: This paper proposes a high-level architecture to support efficient, massively parallel packet classification, filtering and analysis using commodity Graphics Processing Unit (GPU) hardware. The proposed architecture aims to provide a flexible and efficient parallel packet processing and analysis framework, supporting complex programmable filtering, data mining operations, statistical analysis functions and traffic visualisation, with minimal CPU overhead. In particular, this framework aims to provide a robust set of high-speed analysis functionality, in order to dramatically reduce the time required to process and analyse extremely large network traces. This architecture derives from initial research, which has shown GPU co-processors to be effective in accelerating packet classification at up to terabit speeds with minimal CPU overhead, far exceeding the bandwidth capacity between standard long-term storage and the GPU device. This paper provides a high-level overview of the proposed architecture and its primary components, motivated by the results of prior research in the field.
- Full Text:
- Date Issued: 2013
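The paper above applies many filters to many packets in parallel on a GPU. As a rough CPU-side stand-in for that idea, this sketch vectorises a multi-match over packet header fields with NumPy, evaluating every filter against every packet at once to yield a packets-by-filters match matrix; all field values below are synthetic illustrative data, not the GPF implementation.

```python
import numpy as np

# Synthetic header fields for 8 packets: protocol (6=TCP, 17=UDP) and dst port
proto = np.array([6, 6, 17, 6, 6, 17, 6, 6])
dport = np.array([445, 80, 53, 3389, 445, 123, 22, 445])

# Filters as (protocol, dst_port) pairs, e.g. "tcp dst port 445"
filters = np.array([(6, 445), (6, 3389), (17, 53)])

# Broadcast comparison: rows = packets, columns = filters
matches = (proto[:, None] == filters[:, 0]) & (dport[:, None] == filters[:, 1])

print(matches)              # boolean match matrix (8 packets x 3 filters)
print(matches.sum(axis=0))  # per-filter hit counts
```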
A kernel-driven framework for high performance internet routing simulation
- Herbert, Alan, Irwin, Barry V W
- Authors: Herbert, Alan , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429585 , vital:72624 , 10.1109/ISSA.2013.6641048
- Description: The ability to simulate packets traversing an internet path is an integral part of providing realistic simulations for network training and cyber defence exercises. This paper builds on previous work and considers an in-kernel approach to solving the routing simulation problem. The in-kernel approach is anticipated to allow the framework to achieve throughput rates of 1 GB/s or higher using commodity hardware. Processes that run outside the context of the kernel of most operating systems require context switching to access hardware and kernel modules. This leads to considerable delays in processes, such as network simulators, that frequently access hardware for tasks such as hard disk access and network packet handling. To mitigate this problem, as experienced with earlier implementations, this research looks towards implementing a kernel module to handle network routing and simulation within a UNIX-based system. This would remove the delays incurred by context switching and allow direct access to the hardware components of the host.
- Full Text:
- Date Issued: 2013
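The motivation in the abstract above is that user-space simulators pay for frequent kernel crossings. A rough illustrative micro-benchmark of that cost, contrasting a syscall-per-iteration loop with a pure in-memory loop; absolute numbers vary by machine, but the syscall loop is markedly slower.

```python
import os
import time

N = 100_000
fd = os.open(os.devnull, os.O_WRONLY)

t0 = time.perf_counter()
for _ in range(N):
    os.write(fd, b"x")          # one kernel crossing per iteration
syscall_time = time.perf_counter() - t0

buf = bytearray()
t0 = time.perf_counter()
for _ in range(N):
    buf += b"x"                 # stays entirely in user space
memory_time = time.perf_counter() - t0

os.close(fd)
print(f"syscall loop: {syscall_time:.3f}s, in-memory loop: {memory_time:.3f}s")
```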
A source analysis of the conficker outbreak from a network telescope.
- Authors: Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429742 , vital:72636 , 10.23919/SAIEE.2013.8531865
- Description: This paper discusses a dataset of some 16 million packets targeting port 445/tcp collected by a network telescope utilising a /24 netblock in South African IP address space. An initial overview of the collected data is provided. This is followed by a detailed analysis of the packet characteristics observed, including size and TTL. The peculiarities of the observed target selection and the results of the flaw in the Conficker worm's propagation algorithm are presented. An analysis of the 4 million observed source hosts is reported, grouped by both packet counts and the number of distinct hosts per network address block. Address blocks of size /8, /16 and /24 are used for groupings. The localisation, by geographic region and numerical proximity, of high-ranking aggregate netblocks is highlighted. The shift in geopolitical origins observed during the evolution of the Conficker worm is also discussed. The paper concludes with some overall analyses and a consideration of the application of network telescopes to the monitoring of such outbreaks in the future.
- Full Text:
- Date Issued: 2013
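A sketch of the source aggregation described in the record above: grouping observed source addresses into their /8, /16 and /24 parent blocks with the standard library's ipaddress module, then ranking blocks by distinct host count. The addresses below are synthetic examples, not telescope data.

```python
from collections import defaultdict
import ipaddress

sources = ["198.51.100.7", "198.51.100.9", "198.51.7.1", "203.0.113.5"]


def rank_blocks(addrs, prefix):
    """Group addresses by /prefix parent block, ranked by distinct hosts."""
    hosts = defaultdict(set)
    for a in addrs:
        block = ipaddress.ip_network(f"{a}/{prefix}", strict=False)
        hosts[block].add(a)
    return sorted(hosts.items(), key=lambda kv: len(kv[1]), reverse=True)


for prefix in (8, 16, 24):
    print(f"/{prefix}:")
    for block, members in rank_blocks(sources, prefix):
        print(f"  {block}: {len(members)} distinct hosts")
```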
Automated classification of computer network attacks
- van Heerden, Renier, Leenen, Louise, Irwin, Barry V W
- Authors: van Heerden, Renier , Leenen, Louise , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429622 , vital:72627 , 10.1109/ICASTech.2013.6707510
- Description: In this paper we demonstrate how an automated reasoner, HermiT, is used in conjunction with a network attack ontology to classify instances of computer network-based attacks. The ontology describes different types of network attacks through classes and inter-class relationships and has previously been implemented in the Protégé ontology editor. Two significant recent instances of network-based attacks are presented as individuals in the ontology and correctly classified by the automated reasoner according to the relevant types of attack scenarios depicted in the ontology. The two network attack instances are the Distributed Denial of Service attack on Spamhaus in 2013 and the theft of 42 million Rand ($6.7 million) from the South African Postbank in 2012.
- Full Text:
- Date Issued: 2013
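The paper above classifies attack individuals with HermiT over a Protégé-built ontology. A minimal sketch of that workflow in Python using owlready2, which bundles HermiT as its default reasoner (a Java runtime is required); the ontology file is a hypothetical stand-in, not the authors' actual ontology.

```python
from owlready2 import get_ontology, sync_reasoner

# "network_attack.owl" is a hypothetical placeholder for an attack ontology
onto = get_ontology("file://network_attack.owl").load()

with onto:
    sync_reasoner()  # runs HermiT; individuals are re-classified in place

# After reasoning, each individual's inferred classes are available
for ind in onto.individuals():
    print(ind.name, "->", [c.name for c in ind.is_a if hasattr(c, "name")])
```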
Classification of security operation centers
- Jacobs, Pierre, Arnab, Alapan, Irwin, Barry V W
- Authors: Jacobs, Pierre , Arnab, Alapan , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429635 , vital:72628 , 10.1109/ISSA.2013.6641054
- Description: Security Operation Centers (SOCs) are a necessary service for organisations that want to address compliance and threat management. While frameworks exist that address the technology aspects of these services, a holistic framework addressing processes, staffing and technology does not currently exist. Additionally, it would be useful for organisations and constituents considering building, buying or selling these services to be able to measure the effectiveness and maturity of the services provided. In this paper, we propose a classification and rating scheme for SOC services, evaluating both the capabilities and the maturity of the services offered.
- Full Text:
- Date Issued: 2013
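The abstract above proposes rating SOC services on both capability and maturity. One hypothetical way such a two-axis scheme could be expressed in code is sketched below; the service names and 0-5 scales are illustrative assumptions, not the authors' actual scheme.

```python
from statistics import mean

# (service, capability 0-5, maturity 0-5) -- illustrative values only
ratings = [
    ("log collection",      4, 3),
    ("incident response",   3, 2),
    ("threat intelligence", 2, 1),
]

for service, cap, mat in ratings:
    print(f"{service:20s} capability={cap} maturity={mat}")

print(f"aggregate: capability={mean(r[1] for r in ratings):.1f}, "
      f"maturity={mean(r[2] for r in ratings):.1f}")
```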
Classification of security operation centers
- Jacobs, Pierre, Arnab, Alapan, Irwin, Barry V W
- Authors: Jacobs, Pierre , Arnab, Alapan , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429785 , vital:72639 , 10.1109/ISSA.2013.6641054
- Description: Security Operation Centers (SOCs) are a necessary service for organisations that want to address compliance and threat management. While frameworks exist that address the technology aspects of these services, a holistic framework addressing processes, staffing and technology does not currently exist. Additionally, it would be useful for organisations and constituents considering building, buying or selling these services to be able to measure the effectiveness and maturity of the services provided. In this paper, we propose a classification and rating scheme for SOC services, evaluating both the capabilities and the maturity of the services offered.
- Full Text:
- Date Issued: 2013
Deep Routing Simulation
- Irwin, Barry V W, Herbert, Alan
- Authors: Irwin, Barry V W , Herbert, Alan
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430353 , vital:72685 , https://www.academic-bookshop.com/ourshop/prod_2546879-ICIW-2013-8th-International-Conference-on-Information-Warfare-and-Security.html
- Description: This paper discusses a dataset of some 16 million packets targeting port 445/tcp collected by a network telescope utilising a /24 netblock in South African IP address space. An initial overview of the collected data is provided. This is followed by a detailed analysis of the packet characteristics observed, including size and TTL. The peculiarities of the observed target selection and the results of the flaw in the Conficker worm's propagation algorithm are presented. An analysis of the 4 million observed source hosts is reported, grouped by both packet counts and the number of distinct hosts per network address block. Address blocks of size /8, 16 and 24 are used for groupings. The localisation, by geographic region and numerical proximity, of high ranking aggregate netblocks is highlighted. The observed shift in geopolitical origins observed during the evolution of the Conficker worm is also discussed. The paper concludes with some overall analyses, and consideration of the application of network telescopes to the monitoring of such outbreaks in the future.
- Full Text:
- Date Issued: 2013
Developing a virtualised testbed environment in preparation for testing of network based attacks
- Van Heerden, Renier, Pieterse, Heloise, Burke, Ivan, Irwin, Barry V W
- Authors: Van Heerden, Renier , Pieterse, Heloise , Burke, Ivan , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429648 , vital:72629 , 10.1109/ICASTech.2013.6707509
- Description: Computer network attacks are difficult to simulate due to the damage they may cause to live networks and the complexity required to simulate a useful network. We constructed a virtualised network within a vSphere ESXi environment which is able to simulate thirty workstations, ten servers, three distinct network segments and the accompanying network traffic. The vSphere environment provided added benefits, such as the ability to pause, restart and snapshot virtual computers. These abilities enabled the authors to reset the simulation environment before each test and mitigated the damage that an attack potentially inflicts on the test network. Without simulated network traffic, the virtualised network was too sterile. This resulted in any network event being a simple task to detect, making network traffic simulation a requirement for an event detection test bed. Five main kinds of traffic were simulated: web browsing, file transfer, e-mail, version control and intranet file traffic. The simulated traffic volumes were pseudo-randomised to represent differing temporal patterns. By building a virtualised network with simulated traffic we were able to test IDSs and other network attack detection sensors in a much more realistic environment before moving them to a live network. The goal of this paper is to present a virtualised testbed environment in which network attacks can safely be tested.
- Full Text:
- Date Issued: 2013
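The testbed above simulates five traffic types with pseudo-randomised volumes to avoid a sterile network. This sketch shows only the scheduling idea: each traffic type fires at randomised intervals with a randomised volume, with the send() stub standing in for real traffic generation.

```python
import random
import sched
import time

TRAFFIC_TYPES = ["web", "file transfer", "e-mail", "version control", "intranet"]
scheduler = sched.scheduler(time.time, time.sleep)


def send(kind):
    """Stub for one burst of simulated traffic of the given kind."""
    volume = random.randint(1, 50)  # pseudo-random volume, arbitrary units
    print(f"{time.strftime('%H:%M:%S')} {kind}: {volume} units")
    # re-schedule with a randomised inter-arrival time (temporal variation)
    scheduler.enter(random.uniform(0.1, 2.0), 1, send, (kind,))


for kind in TRAFFIC_TYPES:
    scheduler.enter(random.uniform(0.1, 2.0), 1, send, (kind,))

scheduler.run()  # runs indefinitely; interrupt to stop
```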
Real-time distributed malicious traffic monitoring for honeypots and network telescopes
- Hunter, Samuel O, Irwin, Barry V W, Stalmans, Etienne
- Authors: Hunter, Samuel O , Irwin, Barry V W , Stalmans, Etienne
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429660 , vital:72630 , 10.1109/ISSA.2013.6641050
- Description: Network telescopes and honeypots have been used with great success to record malicious network traffic for analysis; however, this is often done off-line, well after the traffic was observed. This has left us with only a cursory understanding of malicious hosts and no knowledge of the software they run, their uptime or other malicious activity they may have participated in. This work covers a messaging framework (rDSN) that was developed to allow for the real-time analysis of malicious traffic. This data was captured from multiple, distributed honeypots and network telescopes. Data was collected over a period of two months from these data sensors. Using this data, new techniques for malicious host analysis and re-identification in dynamic IP address space were explored. An Automated Reconnaissance (AR) Framework was developed to aid the process of data collection; this framework was responsible for gathering information from malicious hosts through both passive and active fingerprinting techniques. From the analysis of this data, correlations between malicious hosts were identified based on characteristics such as operating system, targeted service, location and services running on the malicious hosts. An initial investigation into Latency Based Multilateration (LBM), a novel technique to assist in host re-identification, was conducted and proved successful as a supporting metric for host re-identification.
- Full Text:
- Date Issued: 2013
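Latency Based Multilateration, as described above, re-identifies a host by comparing the round-trip-time vector measured from several sensors against previously recorded vectors: a host that changed IP but kept a similar network position yields a nearby vector. A sketch of the comparison step only, with synthetic RTT values in milliseconds.

```python
import math

known_hosts = {  # host label -> RTTs (ms) measured from sensors A, B, C
    "host-1": (120.0, 45.0, 210.0),
    "host-2": (30.0, 160.0, 90.0),
}


def distance(v1, v2):
    """Euclidean distance between two RTT vectors."""
    return math.dist(v1, v2)


observed = (118.0, 47.0, 205.0)  # new sighting from an unknown IP
best = min(known_hosts, key=lambda h: distance(known_hosts[h], observed))
print(f"closest latency profile: {best} "
      f"(distance {distance(known_hosts[best], observed):.1f} ms)")
```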
Towards a GPU accelerated virtual machine for massively parallel packet classification and filtering
- Nottingham, Alastair, Irwin, Barry V W
- Authors: Nottingham, Alastair , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430295 , vital:72681 , https://doi.org/10.1145/2513456.2513504
- Description: This paper considers the application of GPU co-processors to accelerate the analysis of packet data, particularly within extremely large packet traces spanning months or years of traffic. Discussion focuses on the construction, performance and limitations of the experimental GPF (GPU Packet Filter), which employs a prototype massively-parallel protocol-independent multi-match algorithm to rapidly compare packets against multiple arbitrary filters. The paper concludes with a consideration of mechanisms to expand the flexibility and power of the GPF algorithm to construct a fully programmable GPU packet classification virtual machine, which can perform massively parallel classification, data-mining and data-transformation to explore and analyse packet traces. This virtual machine is a component of a larger framework of capture analysis tools which together provide capture indexing, manipulation, filtering and visualisation functions.
- Full Text:
- Date Issued: 2013
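The classification virtual machine proposed above evaluates programmable filter programs against raw packet bytes. As a toy illustration of that idea, this sketch interprets a tiny BPF-like instruction set (load byte or short, compare) over a packet buffer; the instruction set is invented for illustration and is far simpler than the GPF design.

```python
import struct


def run_filter(program, packet):
    """Return True if the packet matches the filter program."""
    acc = 0
    for op, arg in program:
        if op == "ldb":        # load one byte at offset arg
            acc = packet[arg]
        elif op == "ldh":      # load big-endian short at offset arg
            (acc,) = struct.unpack_from("!H", packet, arg)
        elif op == "eq":       # fail fast unless accumulator == arg
            if acc != arg:
                return False
    return True


# "IPv4 packet to TCP port 445": protocol byte at offset 9, and with a
# 20-byte IP header the TCP destination port sits at offset 22
is_smb = [("ldb", 9), ("eq", 6), ("ldh", 22), ("eq", 445)]

# Synthetic 40-byte packet with just those two fields populated
packet = bytes(9) + bytes([6]) + bytes(12) + struct.pack("!H", 445) + bytes(16)
print(run_filter(is_smb, packet))  # True
```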
Visualization of a data leak
- Swart, Ignus, Grobler, Marthie, Irwin, Barry V W
- Authors: Swart, Ignus , Grobler, Marthie , Irwin, Barry V W
- Date: 2013
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428584 , vital:72522 , 10.1109/ISSA.2013.6641046
- Description: The potential impact that data leakage can have on a country, both on a national level as well as on an individual level, can be wide reaching and potentially catastrophic. In January 2013, several South African companies became the target of a hack attack, resulting in the breach of security measures and the leaking of a claimed 700 000 records. The affected companies are spread across a number of domains, giving the leak a very wide area of impact. The aim of this paper is to analyze the data released from the South African breach and to visualize the extent of the loss by the companies affected. The value of this work lies in its connection to and interpretation of related South African legislation. The data extracted during the analysis is primarily personally identifiable information, as defined by the Electronic Communications and Transactions Act of 2002 and the Protection of Personal Information Bill of 2009.
- Full Text:
- Date Issued: 2013
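A sketch of the kind of summary visualisation the paper above uses to show the extent of the loss per affected company, assuming matplotlib is installed. The company names and record counts are synthetic placeholders, not the actual breach data.

```python
import matplotlib.pyplot as plt

# Synthetic placeholders for affected companies and their leaked record counts
companies = ["company A", "company B", "company C", "company D"]
leaked_records = [310_000, 220_000, 120_000, 50_000]

fig, ax = plt.subplots()
ax.bar(companies, leaked_records)
ax.set_ylabel("leaked records")
ax.set_title("Records exposed per affected company (synthetic data)")
plt.tight_layout()
plt.show()
```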