Extending the NFComms framework for bulk data transfers
- Authors: Pennefather, Sean , Bradshaw, Karen L , Irwin, Barry V W
- Date: 2018
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430152 , vital:72669 , https://doi.org/10.1145/3278681.3278686
- Description: In this paper we present the design and implementation of an indirect messaging extension for the existing NFComms framework that provides communication between a network flow processor and host CPU. This extension addresses the bulk throughput limitations of the framework and is intended to work in conjunction with the existing communication media. Testing of the framework extensions shows an increase in throughput performance of up to 300× that of the current direct message passing framework, at the cost of an increase in single message latency of up to 2×. This trade-off is considered acceptable, as the proposed extensions are intended for bulk data transfer only, while the existing message passing functionality of the framework is preserved and can be used in situations where low latency is required for small messages. (An illustrative code sketch follows this record.)
- Full Text:
- Date Issued: 2018
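The abstract above describes bulk payloads moving through an indirect channel while small control messages stay on the existing direct channel. The following minimal sketch illustrates that split under stated assumptions: the buffer size, descriptor format and class names are hypothetical, not taken from the NFComms source.

```python
# Hedged sketch of the indirect-messaging idea: the direct channel carries
# only small (offset, length) descriptors, while payloads travel through a
# shared bulk buffer. All names here are illustrative.
from collections import deque

BULK_SIZE = 1 << 20  # 1 MiB shared staging buffer (illustrative size)

class IndirectChannel:
    def __init__(self):
        self.bulk = bytearray(BULK_SIZE)   # stands in for shared memory
        self.direct = deque()              # stands in for the direct message ring
        self.write_pos = 0

    def send_bulk(self, payload: bytes):
        """Stage a large payload and pass only a tiny descriptor directly."""
        if self.write_pos + len(payload) > BULK_SIZE:
            self.write_pos = 0             # naive wrap; a real ring tracks a read pointer
        start = self.write_pos
        self.bulk[start:start + len(payload)] = payload
        self.write_pos += len(payload)
        self.direct.append((start, len(payload)))  # descriptor, not the data

    def recv_bulk(self) -> bytes:
        start, length = self.direct.popleft()
        return bytes(self.bulk[start:start + length])

ch = IndirectChannel()
ch.send_bulk(b"x" * 100_000)  # one descriptor instead of thousands of small messages
print(len(ch.recv_bulk()))    # -> 100000
```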
Toward distributed key management for offline authentication
- Authors: Linklater, Gregory , Smith, Christian , Herbert, Alan , Irwin, Barry V W
- Date: 2018
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430283 , vital:72680 , https://doi.org/10.1145/3278681.3278683
- Description: Self-sovereign identity promises prospective users greater control, security, privacy, portability and overall greater convenience; however, the immaturity of current distributed key management solutions results in general disregard of security advisories in favour of convenience and accessibility. This research proposes the use of intermediate certificates as a distributed key management solution. Intermediate certificates will be shown to allow multiple keys to authenticate to a single self-sovereign identity. Keys may be freely added to an identity without requiring a distributed ledger, any other third-party service, or sharing private keys between devices. This research will also show that key rotation is a superior alternative to existing key recovery and escrow systems in helping users recover when their keys are lost or compromised. These features will allow remote credentials to be used to issue, present and appraise remote attestations, without relying on a constant Internet connection. (An illustrative code sketch follows this record.)
- Full Text:
- Date Issued: 2018
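As a companion to the record above, here is a hedged sketch of the chain-of-endorsement idea: one identity key signs each device key (the role the paper assigns to intermediate certificates), so several keys can authenticate to a single identity without a ledger or shared private keys. It assumes the third-party `cryptography` package; all names are illustrative and this is not the paper's implementation.

```python
# Hedged sketch: an identity root key endorses multiple device keys, so any
# endorsed key authenticates to the same self-sovereign identity.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives import serialization
from cryptography.exceptions import InvalidSignature

RAW = dict(encoding=serialization.Encoding.Raw,
           format=serialization.PublicFormat.Raw)

identity = Ed25519PrivateKey.generate()   # root identity key
device = Ed25519PrivateKey.generate()     # a newly added device key

# "Intermediate certificate": the identity signs the device public key.
device_pub = device.public_key().public_bytes(**RAW)
endorsement = identity.sign(device_pub)

def verify(identity_pub: bytes, dev_pub: bytes, endorsement: bytes,
           message: bytes, signature: bytes) -> bool:
    """Check the chain: identity endorsed dev_pub, and dev_pub signed message."""
    try:
        Ed25519PublicKey.from_public_bytes(identity_pub).verify(endorsement, dev_pub)
        Ed25519PublicKey.from_public_bytes(dev_pub).verify(signature, message)
        return True
    except InvalidSignature:
        return False

msg = b"attestation payload"
ok = verify(identity.public_key().public_bytes(**RAW),
            device_pub, endorsement, msg, device.sign(msg))
print(ok)  # -> True
```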
Investigating the effects various compilers have on the electromagnetic signature of a cryptographic executable
- Authors: Frieslaar, Ibraheem , Irwin, Barry V W
- Date: 2017
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430207 , vital:72673 , https://doi.org/10.1145/3129416.3129436
- Description: This research investigates changes in the electromagnetic (EM) signatures of a cryptographic binary executable based on compile-time parameters to the GNU and clang compilers. The source code was compiled and executed on a Raspberry Pi 2, which utilizes the ARMv7 CPU. Various optimization flags were enabled at compile-time and the EM signatures of the resulting binary executables were captured at run-time. It is demonstrated that the GNU and clang compilers produced different EM signatures on program execution, and that the EM signature of the program changes when the O3 optimization flag is used. Additionally, the g++ compiler produced an executable requiring fewer instructions, which corresponded to fewer leaked EM emissions. The EM data from the various compilers under different optimization levels was used as input to a correlation power analysis attack. The results indicated that recovery of partial AES-128 encryption keys was possible, with the fewest subkeys recovered when the clang compiler was used with level O2 optimization. Finally, the research was able to recover 15 of the 16 AES-128 subkeys from the Pi. (An illustrative code sketch follows this record.)
- Full Text:
- Date Issued: 2017
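The correlation power analysis step mentioned above can be illustrated compactly. This self-contained sketch runs CPA on synthetic traces with a simplified Hamming-weight-of-XOR leakage model; a real AES-128 attack would model the first-round S-box output and typically rank guesses by absolute correlation. All data and the secret byte are synthetic.

```python
# Hedged, self-contained sketch of correlation power analysis (CPA).
import random

def hw(x: int) -> int:
    return bin(x).count("1")   # Hamming weight leakage model

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

random.seed(1)
SECRET = 0x3C
plaintexts = [random.randrange(256) for _ in range(500)]
# Synthetic "EM samples": leakage of the secret intermediate plus noise.
traces = [hw(p ^ SECRET) + random.gauss(0, 0.8) for p in plaintexts]

# CPA: pick the key guess whose modelled leakage best correlates with the
# traces. Signed correlation is used because with this linear XOR model the
# complementary guess (k XOR 0xFF) correlates perfectly negatively.
scores = {k: pearson([hw(p ^ k) for p in plaintexts], traces)
          for k in range(256)}
best = max(scores, key=scores.get)
print(hex(best))  # -> 0x3c
```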
Real-time distributed malicious traffic monitoring for honeypots and network telescopes
- Authors: Hunter, Samuel O , Irwin, Barry V W , Stalmans, Etienne
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429660 , vital:72630 , 10.1109/ISSA.2013.6641050
- Description: Network telescopes and honeypots have been used with great success to record malicious network traffic for analysis; however, this is often done offline, well after the traffic was observed. This has left us with only a cursory understanding of malicious hosts and no knowledge of the software they run, their uptime or other malicious activity they may have participated in. This work covers a messaging framework (rDSN) that was developed to allow for the real-time analysis of malicious traffic captured from multiple distributed honeypots and network telescopes. Data was collected from these sensors over a period of two months. Using this data, new techniques for malicious host analysis and re-identification in dynamic IP address space were explored. An Automated Reconnaissance (AR) framework was developed to aid the process of data collection; it was responsible for gathering information from malicious hosts through both passive and active fingerprinting techniques. From the analysis of this data, correlations between malicious hosts were identified based on characteristics such as operating system, targeted service, location and the services running on the hosts. An initial investigation into Latency Based Multilateration (LBM), a novel technique to assist in host re-identification, proved successful as a supporting metric for re-identification. (An illustrative code sketch follows this record.)
- Full Text:
- Date Issued: 2013
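To make the LBM idea above concrete, the sketch below treats the per-sensor latency vector of a host as a coarse fingerprint and flags a plausible re-identification when a new observation lies within a tolerance of a stored vector. The sensors, RTT values and threshold are illustrative, not from the paper.

```python
# Hedged sketch of Latency Based Multilateration (LBM) as a supporting
# metric: similar RTT vectors from distributed sensors suggest the same
# host, even after its IP address changes.
import math

def lbm_distance(a, b):
    """Euclidean distance between two per-sensor RTT vectors (ms)."""
    return math.dist(a, b)

# RTTs (ms) from three distributed sensors to previously observed hosts.
seen_earlier = {"host-A": [182.0, 95.5, 240.3], "host-B": [12.1, 160.7, 88.9]}
new_observation = [180.4, 97.0, 238.8]   # same host under a new IP address?

THRESHOLD = 10.0  # illustrative tolerance for routing jitter
for name, vec in seen_earlier.items():
    d = lbm_distance(vec, new_observation)
    if d < THRESHOLD:
        print(f"{name}: distance {d:.1f} ms -> plausible re-identification")
```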
Towards a GPU accelerated virtual machine for massively parallel packet classification and filtering
- Authors: Nottingham, Alastair , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430295 , vital:72681 , https://doi.org/10.1145/2513456.2513504
- Description: This paper considers the application of GPU co-processors to accelerate the analysis of packet data, particularly within extremely large packet traces spanning months or years of traffic. Discussion focuses on the construction, performance and limitations of the experimental GPF (GPU Packet Filter), which employs a prototype massively-parallel protocol-independent multi-match algorithm to rapidly compare packets against multiple arbitrary filters. The paper concludes with a consideration of mechanisms to expand the flexibility and power of the GPF algorithm to construct a fully programmable GPU packet classification virtual machine, which can perform massively parallel classification, data-mining and data-transformation to explore and analyse packet traces. This virtual machine is a component of a larger framework of capture analysis tools which together provide capture indexing, manipulation, filtering and visualisation functions. (An illustrative code sketch follows this record.)
- Full Text:
- Date Issued: 2013
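The protocol-independent multi-match at the heart of GPF can be pictured as below: each filter is a set of (offset, mask, value) byte tests on the raw packet, and every packet is tested against every filter. On a GPU, each (packet, filter) pair would map to a thread; this sequential Python sketch with hypothetical filters is only meant to show the matching semantics.

```python
# Hedged sketch of protocol-independent multi-match classification.
FILTERS = {
    "ipv4":     [(12, 0xFF, 0x08), (13, 0xFF, 0x00)],                   # EtherType 0x0800
    "ipv4_tcp": [(12, 0xFF, 0x08), (13, 0xFF, 0x00), (23, 0xFF, 0x06)], # + IP proto 6
}

def matches(packet: bytes, rules) -> bool:
    return all(len(packet) > off and packet[off] & mask == val
               for off, mask, val in rules)

def classify(packets):
    """Return, per packet, the set of filters it matches (multi-match)."""
    return [[name for name, rules in FILTERS.items() if matches(p, rules)]
            for p in packets]

# 14-byte Ethernet header with EtherType 0x0800, then an IPv4 header whose
# protocol field (offset 23 overall) is 6 (TCP). Padding is zeroed.
pkt = bytes(12) + b"\x08\x00" + bytes(9) + b"\x06" + bytes(20)
print(classify([pkt]))  # -> [['ipv4', 'ipv4_tcp']]
```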
CaptureFoundry: a GPU accelerated packet capture analysis tool
- Authors: Nottingham, Alastair , Richter, John , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430112 , vital:72666 , https://doi.org/10.1145/2389836.2389877
- Description: Packet captures are used to support a variety of tasks, including network administration, fault diagnosis, and security- and network-related research. Despite their usefulness, processing packet capture files is a slow and tedious process that impedes the analysis of large, long-term captures. This paper discusses the primary components and observed performance of CaptureFoundry, a stand-alone capture analysis support tool designed to quickly map, filter and extract packets from large capture files using a combination of indexing techniques and GPU accelerated packet classification. All results are persistent, and may be used to rapidly extract small pre-filtered captures on demand that may be analysed quickly in existing capture analysis applications. Performance results show that CaptureFoundry is capable of generating multiple indexes and classification results for large captures at hundreds of megabytes per second, with minimal CPU and memory overhead and only minor additional storage space requirements. (An illustrative code sketch follows this record.)
- Full Text:
- Date Issued: 2012
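The indexing-plus-extraction workflow described above can be sketched with the classic pcap layout: scan once, record per-packet offsets, then cut sub-captures with seeks instead of a full re-parse. This is a minimal illustration, not CaptureFoundry's actual index format; error handling and persisting the index to disk are omitted.

```python
# Hedged sketch: build a packet offset index over a classic pcap file, then
# extract a pre-filtered sub-capture on demand.
import struct

PCAP_GLOBAL = 24   # classic pcap global header size
REC_HDR = 16       # per-record header: ts_sec, ts_usec, incl_len, orig_len

def build_index(path):
    """Return a list of (offset, incl_len, ts_sec) for every packet record."""
    index = []
    with open(path, "rb") as f:
        magic = f.read(PCAP_GLOBAL)[:4]
        endian = "<" if magic in (b"\xd4\xc3\xb2\xa1", b"\x4d\x3c\xb2\xa1") else ">"
        while True:
            pos = f.tell()
            hdr = f.read(REC_HDR)
            if len(hdr) < REC_HDR:
                return index
            ts_sec, _usec, incl_len, _orig = struct.unpack(endian + "IIII", hdr)
            index.append((pos, incl_len, ts_sec))
            f.seek(incl_len, 1)   # skip the packet body

def extract(path, index, wanted, out_path):
    """Write a sub-capture containing only the records selected by 'wanted'."""
    with open(path, "rb") as src, open(out_path, "wb") as dst:
        dst.write(src.read(PCAP_GLOBAL))          # reuse the global header
        for off, incl_len, ts in index:
            if wanted(ts):
                src.seek(off)
                dst.write(src.read(REC_HDR + incl_len))
```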
Parallel packet classification using GPU co-processors
- Authors: Nottingham, Alastair , Irwin, Barry V W
- Date: 2010
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430250 , vital:72677 , https://doi.org/10.1145/1899503.1899529
- Description: In the domain of network security, packet filtering for classification purposes is of significant interest. Packet classification provides a mechanism for understanding the composition of packet streams arriving at distinct network interfaces, and is useful in diagnosing threats and uncovering vulnerabilities so as to maximise data integrity and system security. Traditional packet classifiers, such as PCAP, have utilised Control Flow Graphs (CFGs) in representing filter sets, due to both their amenability to optimisation and their inherent structural applicability to the metaphor of decision-based classification. Unfortunately, CFGs do not map well to cooperative processing implementations, and single-threaded CPU-based implementations have proven too slow for real-time classification against multiple arbitrary filters on next generation networks. In this paper, we consider a novel multithreaded classification algorithm, optimised for execution on GPU co-processors, intended to accelerate classification throughput and maximise processing efficiency in a highly parallel execution context. (An illustrative code sketch follows this record.)
- Full Text:
- Date Issued: 2010
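Because every (packet, filter) test is independent, the classification problem above parallelises trivially, which is what makes it a good co-processor fit. The sketch below uses CPU worker processes as a stand-in for GPU threads; the one-byte filter predicate is a toy, and results come back grouped by chunk rather than in input order.

```python
# Hedged sketch: embarrassingly parallel packet classification, with worker
# processes standing in for the thousands of GPU threads the paper targets.
from concurrent.futures import ProcessPoolExecutor

def classify_chunk(chunk):
    # Toy filter: flag packets whose first byte looks like an IPv4 header (0x45).
    return [p[:1] == b"\x45" for p in chunk]

def parallel_classify(packets, workers=4):
    chunks = [packets[i::workers] for i in range(workers)]  # round-robin split
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Results are grouped per chunk; a real classifier would carry packet ids.
        return [r for part in pool.map(classify_chunk, chunks) for r in part]

if __name__ == "__main__":
    pkts = [b"\x45" + bytes(19), b"\x60" + bytes(19)] * 1000
    results = parallel_classify(pkts)
    print(sum(results))  # -> 1000 IPv4-looking packets
```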
A Framework for the Rapid Development of Anomaly Detection Algorithms in Network Intrusion Detection Systems
- Authors: Barnett, Richard J , Irwin, Barry V W
- Date: 2009
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428644 , vital:72526 , https://www.researchgate.net/profile/Johan-Van-Niekerk-2/publication/220803295_E-mail_Security_awareness_at_Nelson_Mandela_Metropolitan_University_Registrar's_Division/links/0deec51909304b0ed8000000/E-mail-Security-awareness-at-Nelson-Mandela-Metropolitan-University-Registrars-Division.pdf#page=289
- Description: Most current Network Intrusion Detection Systems (NIDS) perform detection by matching traffic to a set of known signatures. These systems have well-defined mechanisms for the rapid creation and deployment of new signatures. However, despite their support for anomaly detection, this support is usually limited and often requires a full recompilation of the system to deploy new algorithms. (An illustrative code sketch follows this record.)
- Full Text:
- Date Issued: 2009
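The abstract argues that anomaly-detection algorithms should be deployable as easily as signatures. One plausible shape for such a mechanism, sketched below with hypothetical names rather than the paper's design, is a runtime registry to which new detector callables are added without rebuilding the NIDS.

```python
# Hedged sketch of a rapid-deployment mechanism for anomaly detectors:
# algorithms register by name and can be added or swapped at runtime.
DETECTORS = {}

def detector(name):
    """Decorator: register an anomaly-detection callable under a name."""
    def register(fn):
        DETECTORS[name] = fn
        return fn
    return register

@detector("oversize_packet")
def oversize_packet(packet: bytes) -> bool:
    return len(packet) > 1500          # toy anomaly: larger than Ethernet MTU

def inspect(packet: bytes):
    return [name for name, fn in DETECTORS.items() if fn(packet)]

print(inspect(bytes(2000)))  # -> ['oversize_packet']
# A new algorithm is "deployed" by registering another function; detectors
# could equally be loaded from plugin modules via importlib at runtime.
```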
Evaluating text preprocessing to improve compression on maillogs
- Authors: Otten, Fred , Irwin, Barry V W , Thinyane, Hannah
- Date: 2009
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430138 , vital:72668 , https://doi.org/10.1145/1632149.1632157
- Description: Maillogs contain important information about mail which has been sent or received. This information can be used for statistical purposes, to help prevent viruses or to help prevent SPAM. In order to satisfy regulations and follow good security practices, maillogs need to be monitored and archived. Since there is a large quantity of data, some form of data reduction is necessary. Data compression programs such as gzip and bzip2 are commonly used to reduce the quantity of data. Text preprocessing can be used to aid the compression of English text files. This paper evaluates whether text preprocessing, particularly word replacement, can be used to improve the compression of maillogs. It presents an algorithm for constructing a dictionary for word replacement and provides the results of experiments conducted using the ppmd, gzip, bzip2 and 7zip programs. These tests show that text preprocessing improves data compression on maillogs. Improvements of up to 56 percent in compression time and up to 32 percent in compression ratio are achieved. It also shows that a dictionary may be generated and used on other maillogs to yield reductions within half a percent of the results achieved for the maillog used to generate the dictionary. (An illustrative code sketch follows this record.)
- Full Text:
- Date Issued: 2009
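The word-replacement preprocessing evaluated above can be sketched end to end: count frequent maillog tokens, map them to short codes, and compare compressed sizes. The dictionary construction here is a plain frequency count and the log lines are synthetic; the paper's actual algorithm and corpora are not reproduced.

```python
# Hedged sketch: dictionary-based word replacement before compression.
import gzip
from collections import Counter

def build_dictionary(text: str, size: int = 64):
    """Map the most frequent words to short escape-coded tokens."""
    words = Counter(text.split()).most_common(size)
    return {w: f"\x01{i:02x}" for i, (w, _) in enumerate(words)}

def preprocess(text: str, table) -> str:
    return "\n".join(" ".join(table.get(w, w) for w in line.split())
                     for line in text.splitlines())

log = ("Oct 12 mail postfix/smtpd connect from unknown\n" * 200
       + "Oct 12 mail postfix/smtpd disconnect from unknown\n" * 200)
table = build_dictionary(log)
raw = gzip.compress(log.encode())
pre = gzip.compress(preprocess(log, table).encode())
print(len(raw), "->", len(pre))  # preprocessed text typically compresses smaller
```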
GPU packet classification using OpenCL: a consideration of viable classification methods
- Authors: Nottingham, Alastair , Irwin, Barry V W
- Date: 2009
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430164 , vital:72670 , https://doi.org/10.1145/1632149.1632170
- Description: Packet analysis is an important aspect of network security, which typically relies on a flexible packet filtering system to extrapolate important packet information from each processed packet. Packet analysis is a computationally intensive, highly parallelisable task, and as such, classification of large packet sets, such as those collected by a network telescope, can require significant processing time. We wish to improve upon this through parallel classification on a GPU. In this paper, we first consider the OpenCL architecture and its applicability to packet analysis. We then introduce a number of packet demultiplexing and routing algorithms, and finally present a discussion on how some of these techniques may be leveraged within a GPGPU context to improve packet classification speeds. (An illustrative code sketch follows this record.)
- Full Text:
- Date Issued: 2009
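Packet demultiplexing, one of the routing steps the paper introduces, amounts to fanning packets out into per-protocol queues after a cheap header probe; on a GPU each work-item would probe one packet. In the sketch below the protocol numbers are real IANA values, while the packets and queueing are toys.

```python
# Hedged sketch of packet demultiplexing into per-protocol queues.
from collections import defaultdict

PROTO_NAMES = {1: "icmp", 6: "tcp", 17: "udp"}

def demultiplex(packets):
    """Route raw IPv4 packets into queues keyed by transport protocol."""
    queues = defaultdict(list)
    for p in packets:
        proto = p[9] if len(p) > 9 else None   # IPv4 protocol field
        queues[PROTO_NAMES.get(proto, "other")].append(p)
    return queues

pkts = [bytes(9) + bytes([6]) + bytes(10),    # TCP
        bytes(9) + bytes([17]) + bytes(10),   # UDP
        bytes(9) + bytes([89]) + bytes(10)]   # OSPF -> "other"
print({k: len(v) for k, v in demultiplex(pkts).items()})
```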
Management, Processing and Analysis of Cryptographic Network Protocols
- Authors: Cowie, Bradley , Irwin, Barry V W , Barnett, Richard J
- Date: 2009
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428687 , vital:72529
- Description: The use of cryptographic protocols as a means to provide security to web servers and services at the transport layer, by providing both encryption and authentication to data transfer, has become increasingly popular. However, we note that it is rather difficult to perform legitimate analysis, intrusion detection and debugging on cryptographic protocols, as the data that passes through is encrypted. In this paper we assume that we have legitimate access to the data and that we have the private key used in transactions, and thus we will be able to decrypt the data. The objective is to produce a suitable application framework that allows for easy recovery and secure storage of cryptographic keys, including appropriate tools to decapsulate traffic and to decrypt live packet streams or precaptured traffic contained in PCAP files. The resultant processing will then be able to provide a clear-text stream which can be used for further analysis. (An illustrative code sketch follows this record.)
- Full Text:
- Date Issued: 2009
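Once the framework above has recovered a session key, decapsulation reduces to running the captured ciphertext back through the cipher. The hedged sketch below shows that step for AES-CBC using the third-party `cryptography` package; the key and IV are synthetic stand-ins for recovered material, and TLS record framing and padding handling are omitted for brevity.

```python
# Hedged sketch: decrypt a captured ciphertext stream given recovered keys.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(16), os.urandom(16)     # stands in for recovered material

def encrypt(plaintext: bytes) -> bytes:      # stands in for captured traffic
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(plaintext) + enc.finalize()

def decapsulate(ciphertext: bytes) -> bytes:
    """Recover the cleartext stream for downstream IDS/analysis tools."""
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    return dec.update(ciphertext) + dec.finalize()

captured = encrypt(b"GET /index.html HTTP/1.1" + bytes(8))  # 32 bytes, block-aligned
print(decapsulate(captured))
```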
An investigation into unintentional information leakage through electronic publication
- Authors: Forrester, Jock , Irwin, Barry V W
- Date: 2005
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428814 , vital:72538 , https://digifors.cs.up.ac.za/issa/2005/Proceedings/Poster/012_Article.pdf
- Description: Organisations are publishing electronic documents on their websites and via email to clients and potentially un-trusted third parties. This trend can be attributed to the ease of use of desktop publishing/editing software as well as the increasingly connected environment that employees work in. Advanced document editors have features that enable the use of group editing, version control and multi-user authoring. Unfortunately, these advanced features also have their disadvantages. Metadata used to enable the collaborative features can unintentionally expose confidential data to unauthorised users once the document has been published. (An illustrative code sketch follows this record.)
- Full Text:
- Date Issued: 2005
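The leak described above is easy to demonstrate on OOXML files, which are zip archives carrying a docProps/core.xml part with creator, last-modified-by and revision fields. The stdlib-only sketch below dumps those fields from any .docx-style file supplied on the command line; it illustrates the leak generically and is not the paper's methodology.

```python
# Hedged sketch: surface the document metadata embedded in an OOXML file.
import sys
import zipfile
import xml.etree.ElementTree as ET

def document_metadata(path):
    """Return the core-properties fields embedded in an OOXML document."""
    with zipfile.ZipFile(path) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    # Tags are namespaced; report local names only.
    return {el.tag.split("}")[-1]: (el.text or "") for el in root}

if __name__ == "__main__":
    for field, value in document_metadata(sys.argv[1]).items():
        print(f"{field}: {value}")   # e.g. creator, lastModifiedBy, revision
```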
Securing Real-time multimedia: A brief survey
- Authors: Cloran, Russell , Irwin, Barry V W , Terzoli, Alfredo
- Date: 2005
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428887 , vital:72543 , https://digifors.cs.up.ac.za/issa/2005/Proceedings/Research/020_Article.pdf
- Description: Voice over IP (VoIP) enables cheaper and easier communication but can be less secure than the traditional TDM network. This paper is a guide to securing VoIP networks using current technologies and best practices. Physical and logical segregation of data and multimedia traffic is discussed. Current VoIP analysis tools are described with specific reference to their usefulness as a means of evaluating the quality of a secure VoIP system. Protocol enhancements, such as the Secure Real-time Transport Protocol, and transport layer protection, such as that offered by IPSec, are discussed and evaluated. Finally, various secure VoIP implementation scenarios are discussed, with configurations combining these security solutions presented in the paper. (An illustrative code sketch follows this record.)
- Full Text:
- Date Issued: 2005
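Of the protocol enhancements surveyed above, SRTP's per-packet authentication is the easiest to illustrate: a truncated HMAC tag is appended so tampered media is rejected. The sketch below shows only that tag (real SRTP derives session keys, encrypts the payload and covers additional fields); the key and packet bytes are placeholders.

```python
# Hedged illustration of SRTP-style packet authentication: a truncated
# HMAC-SHA1 tag lets receivers reject forged or altered media packets.
import hmac
import hashlib

TAG_LEN = 10  # SRTP's common 80-bit authentication tag length

def protect(session_key: bytes, rtp_packet: bytes) -> bytes:
    tag = hmac.new(session_key, rtp_packet, hashlib.sha1).digest()[:TAG_LEN]
    return rtp_packet + tag

def verify(session_key: bytes, protected: bytes) -> bytes:
    packet, tag = protected[:-TAG_LEN], protected[-TAG_LEN:]
    expected = hmac.new(session_key, packet, hashlib.sha1).digest()[:TAG_LEN]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication tag mismatch; packet rejected")
    return packet

key = b"\x00" * 20                      # stands in for a negotiated session key
pkt = verify(key, protect(key, b"\x80\x60" + bytes(30)))  # round-trips cleanly
print(len(pkt))  # -> 32
```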
Trust on the Web
- Authors: Cloran, Russell , Irwin, Barry V W
- Date: 2005
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428900 , vital:72544 , https://digifors.cs.up.ac.za/issa/2005/Proceedings/Full/025_Article.pdf
- Description: This paper forms a backdrop for work investigating trust on the semantic web. With the mass of information currently available on the web, and the low barrier to entry for the publication of information on the web, it can be difficult to classify the authority of information found on the web. We use a case study of a suspected phishing scam in South Africa to examine the methods an advanced user may use to verify the authenticity of a web site and the information it published. From this case study, we see that a website which is legitimate may easily appear to be a scam, because of the manner in which information is presented and the failure to use established industry best practices. We discuss a number of ways in which doubt may have been eliminated. We then discuss how a distributed trust system, as favoured by many researchers in trust on the semantic web, may have been implemented in this case to prove the authenticity of the site without the traditional means involving the high cost of a digital certificate from a recognised Certificate Authority.
- Full Text:
- Date Issued: 2005
Unlocking the armour: enabling intrusion detection and analysis of encrypted traffic streams
- Authors: Irwin, Barry V W
- Date: 2005
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428845 , vital:72540
- Description: In the interests of maintaining end-to-end security, increasing volumes of information are being encrypted while in transit. Many organisations and users will make use of secure encrypted protocols for information interchange given an option. The very security that is provided by these transport protocols, such as IPSEC, HTTPS and SSH, also acts against the security monitoring of an organisation's traffic. Intrusion detection systems are no longer easily able to inspect the payload of encrypted protocols. Similarly, these protocols can potentially be difficult for security and network administrators to debug, validate and analyse. This paper discusses the need for a means by which a trusted third party can unpack encrypted data traversing a network, and proposes an architecture which would enable this to be achieved through the extraction and sharing of the appropriate encipherment tokens, based on the assumption that an organisation has legitimate access to one side of a communication entering or exiting its network. This problem also has particular relevance to honey-net research and for investigators trying to perform real-time monitoring of an intruder which is making use of such a protected protocol. A proof of concept implementation of the proposed architecture is also discussed. (An illustrative code sketch follows this record.)
- Full Text:
- Date Issued: 2005
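The core exchange in the proposed architecture is the handover of encipherment tokens from a cooperating endpoint to a trusted monitor. The sketch below models that as a per-flow escrow keyed by the connection 4-tuple; the class, transport and token format are hypothetical, and the actual decryption step is out of scope here.

```python
# Hedged sketch: a cooperating endpoint deposits each session's encipherment
# token with a trusted monitor, keyed by the connection 4-tuple, so the IDS
# can later unpack that flow for inspection.
from typing import Dict, Tuple

FlowKey = Tuple[str, int, str, int]   # (src_ip, src_port, dst_ip, dst_port)

class KeyEscrow:
    """Trusted third party holding per-flow session keys for inspection."""
    def __init__(self):
        self._tokens: Dict[FlowKey, bytes] = {}

    def deposit(self, flow: FlowKey, session_key: bytes):
        # Called by the cooperating endpoint when a secure session starts.
        self._tokens[flow] = session_key

    def token_for(self, flow: FlowKey) -> bytes:
        # Called by the IDS/monitor when it sees ciphertext on this flow;
        # with the token it can decrypt and inspect the stream in real time.
        return self._tokens[flow]

escrow = KeyEscrow()
flow = ("10.0.0.5", 51234, "203.0.113.9", 443)
escrow.deposit(flow, b"\x11" * 32)
assert escrow.token_for(flow) == b"\x11" * 32
```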
XML digital signature and RDF
- Authors: Cloran, Russell , Irwin, Barry V W
- Date: 2005
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428874 , vital:72542 , https://digifors.cs.up.ac.za/issa/2005/Proceedings/Poster/026_Article.pdf
- Description: The XML Signature working group focuses on the canonicalisation of XML, and the syntax used to sign an XML document. This process focuses on the semantics introduced by the XML language itself, but ignores semantics which a particular application of XML may add. The Resource Description Framework (RDF) is a language for representing information about resources on the Web. RDF has a number of possible serialisations, including an XML serialisation (RDF/XML), popularly used as the format for exchanging RDF data. In general, the order of statements in RDF is not important, and thus the order in which XML tags occur in RDF/XML can vary greatly whilst still preserving semantics. This paper examines some of the issues surrounding the canonicalisation of RDF/XML and the signing of it, discussing nesting, node identifiers and the ordering of nodes. Existing RDF serialisation formats are considered as case studies of partially canonical RDF formats. (An illustrative code sketch follows this record.)
- Full Text:
- Date Issued: 2005
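The ordering problem described above is the crux of canonicalisation: two serialisations of the same RDF graph must sign and verify identically. The sketch below canonicalises by sorting N-Triples statements before hashing, which works for graphs without blank nodes; blank-node labelling, part of the paper's node-identifier discussion, is deliberately ignored.

```python
# Hedged sketch: statement order must not affect an RDF graph's signature,
# so sort the triples into a canonical form before hashing/signing.
import hashlib

def canonical_digest(ntriples: str) -> str:
    triples = sorted(line.strip() for line in ntriples.splitlines() if line.strip())
    return hashlib.sha256("\n".join(triples).encode()).hexdigest()

doc_a = """
<http://example.org/s> <http://example.org/p> "one" .
<http://example.org/s> <http://example.org/q> "two" .
"""
doc_b = """
<http://example.org/s> <http://example.org/q> "two" .
<http://example.org/s> <http://example.org/p> "one" .
"""
# Same graph, different statement order -> identical digest to sign.
print(canonical_digest(doc_a) == canonical_digest(doc_b))  # -> True
```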