Resource Allocation Framework in Fog Computing for the Internet of Things Environments
- Authors: Vambe, William Tichaona
- Date: 2020
- Subjects: Internet of things Cloud computing
- Language: English
- Type: Thesis , Doctoral , PhD (Computer Science)
- Identifier: http://hdl.handle.net/10353/18498 , vital:42575
- Description: Fog computing plays a pivotal role in the Internet of Things (IoT) ecosystem because of its ability to support delay-sensitive tasks, bringing resources from cloud servers closer to the “ground” and supporting resource-constrained IoT devices. Although fog computing offers benefits such as quick response to requests, geo-distributed data processing and data processing in the proximity of the IoT devices, the exponential increase in IoT devices and the large volumes of data being generated have led to a new set of challenges. One such problem is the allocation of resources to IoT tasks to match their computational needs and quality of service (QoS) requirements, whilst meeting both task deadlines and user expectations. Most solutions proposed in existing work suggest task offloading mechanisms in which IoT devices offload their tasks randomly to the fog layer or the cloud layer. This helps in minimizing the communication delay; however, most tasks end up missing their deadlines because many delays are experienced during offloading. This study proposes and introduces a Resource Allocation Scheduler (RAS) at the IoT-Fog gateway, whose goal is to decide where and when a task is to be offloaded, either to the fog layer or the cloud layer, based on its priority, computational needs and QoS requirements. This aim places the work within the communication networks domain, in the transport layer of the Open Systems Interconnection (OSI) model. As such, this study follows the four phases of the top-down approach because of its reusability characteristics. To validate and test the efficiency and effectiveness of the RAS, the fog framework was implemented and evaluated in a simulated smart home setup. The essential metrics used to check whether round-trip time was minimized were queuing time, offloading time and throughput for QoS. The results showed that the RAS helps to reduce round-trip time, increases throughput and leads to improved QoS. Furthermore, the approach addressed the starvation problem, a phenomenon that tends to affect low-priority tasks. Most importantly, the results provide evidence that if resource allocation and assignment are done appropriately, round-trip time can be reduced and QoS can be improved in fog computing. The significant contribution of this research is the novel framework, which minimizes round-trip time, addresses the starvation problem and improves QoS. Moreover, a literature review paper, regarded by reviewers as the first of its kind on QoS in fog computing, was produced. An illustrative sketch of the gateway scheduling decision follows this record.
- Full Text:
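The abstract above describes a gateway-level scheduler that routes each task to the fog or the cloud based on priority, deadline and computational need. The thesis does not publish its algorithm here, so the following is only a minimal, hypothetical Python sketch of such a decision rule; the Task class, FOG_CAPACITY, FOG_RTT_MS and CLOUD_RTT_MS names and values are invented for illustration.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: int                               # lower value = more urgent
    deadline_ms: float = field(compare=False)   # latency budget for the task
    cpu_demand: float = field(compare=False)    # arbitrary compute units
    name: str = field(compare=False, default="task")

FOG_CAPACITY = 50.0   # assumed compute available at the fog layer
FOG_RTT_MS = 20.0     # assumed IoT-fog round-trip latency
CLOUD_RTT_MS = 200.0  # assumed IoT-cloud round-trip latency

def decide_offload(task: Task, fog_load: float) -> str:
    """Toy placement rule: deadline-critical tasks that fit at the fog go to fog,
    otherwise they fall back to the cloud."""
    fits_in_fog = fog_load + task.cpu_demand <= FOG_CAPACITY
    meets_deadline_at_fog = FOG_RTT_MS < task.deadline_ms
    return "fog" if (fits_in_fog and meets_deadline_at_fog) else "cloud"

# Priority queue at the gateway: high-priority tasks are popped first.
queue: list[Task] = []
heapq.heappush(queue, Task(priority=1, deadline_ms=50, cpu_demand=5, name="alarm"))
heapq.heappush(queue, Task(priority=5, deadline_ms=5000, cpu_demand=30, name="log-upload"))

fog_load = 0.0
while queue:
    t = heapq.heappop(queue)
    target = decide_offload(t, fog_load)
    if target == "fog":
        fog_load += t.cpu_demand
    print(f"{t.name} -> {target}")
```

A real RAS would also apply an aging or priority-boost rule so that low-priority tasks waiting in the queue are not starved, as the abstract notes.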
A Model for Intrusion Detection in IoT using Machine Learning
- Authors: Nkala, Junior Ruddy
- Date: 2019
- Subjects: Internet of things
- Language: English
- Type: Thesis , Masters , MSc (Computer Science)
- Identifier: http://hdl.handle.net/10353/17180 , vital:40863
- Description: The Internet of Things is an open and comprehensive global network of intelligent objects that have the capacity to auto-organize and to share information, data and resources. There are currently over a billion devices connected to the Internet, and this number increases by the day. While these devices make our lives easier, safer and healthier, they expand the number of attack targets vulnerable to cyber-attacks from potential hackers and malicious software. Therefore, protecting these devices from adversaries and from unauthorized access and modification is very important. The purpose of this study is to develop a secure, lightweight intrusion and anomaly detection model for IoT to help detect threats in the environment. We propose the use of data mining and machine learning algorithms as a classification technique for detecting abnormal or malicious traffic transmitted between devices due to potential attacks such as DoS, Man-in-the-Middle and flooding attacks at the application level. This study makes use of two robust machine learning algorithms, namely C4.5 decision trees and K-means clustering, to develop an anomaly detection model. MATLAB Math Simulator was used for implementation. The study conducts a series of experiments in detecting abnormal and normal data in a dataset that contains gas concentration readings from a number of sensors deployed in an Italian city over a year. Thereafter we examined the classification performance, in terms of accuracy, of the proposed anomaly detection model. Results drawn from the experiments indicate that the size of the training sample improves the classification ability of the proposed model. Our findings noted that the choice of discretization algorithm does matter in the quest for optimal classification performance. The proposed model proved accurate in detecting anomalies in IoT and in classifying between normal and abnormal data. The proposed model has a classification accuracy of 96.51%, which proved to be higher than that of other algorithms such as Naïve Bayes. The model proved to be lightweight and efficient in terms of being faster at training and testing compared to artificial neural networks. The conclusions drawn from this research are the perspective of a novice machine learning researcher, with valuable recommendations that ensure optimal classification of normal and abnormal IoT data. A minimal sketch of the clustering-plus-decision-tree pipeline follows this record.
- Full Text:
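The model above combines K-means clustering with a C4.5 decision tree in MATLAB. As a rough, non-authoritative illustration of that two-step idea, the sketch below uses scikit-learn stand-ins (KMeans, and an entropy-criterion DecisionTreeClassifier as an approximation of C4.5) on synthetic data; the dataset, feature values and cluster-to-label rule are assumptions, not the thesis's gas-sensor data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Synthetic stand-in for sensor readings: mostly normal traffic plus injected anomalies.
normal = rng.normal(loc=0.5, scale=0.1, size=(900, 4))
anomalous = rng.normal(loc=0.9, scale=0.2, size=(100, 4))
X = np.vstack([normal, anomalous])

# Step 1: unsupervised K-means groups readings; the smaller cluster is treated as anomalous.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
counts = np.bincount(km.labels_)
anomaly_cluster = int(np.argmin(counts))
y = (km.labels_ == anomaly_cluster).astype(int)   # pseudo-labels: 1 = anomaly

# Step 2: an entropy-based decision tree (similar in spirit to C4.5) learns the boundary.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(criterion="entropy", max_depth=5, random_state=0)
tree.fit(X_train, y_train)
print("classification accuracy:", accuracy_score(y_test, tree.predict(X_test)))
```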
Ontological Model for Xhosa Beadwork in Marginalised Rural Communities: A Case of the Eastern Cape
- Authors: Tinarwo, Loyd
- Date: 2019
- Subjects: Ontology Beadwork
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10353/15749 , vital:40516
- Description: In South Africa, computational ontologies have gained traction and are increasingly viewed as one of the viable solutions to the fragmented and unstructured nature of indigenous knowledge (IK), particularly in marginalized rural communities. The continued existence of IK in tacit form has impeded its use as a potential resource that can catalyze socio-economic and cultural development in South Africa. This study was therefore designed to address part of this challenge by developing a Xhosa Beadwork Ontology (XBO), with the goal of structuring the domain knowledge into a reusable body of knowledge. Such a reusable body of knowledge promotes efficient sharing of a common understanding of Xhosa Beadwork in a computational form. The XBO is expressed in OWL 2 DL. The development of the XBO was informed by the NeOn methodology and the iterative-incremental ontology development life cycle within the ambit of Action Research (AR). The XBO was developed around personal-ornamentation Xhosa Beadwork consisting of the Necklace, Headband, Armlet, Waistband, Bracelet and Anklet. The evaluation of the XBO focused on ascertaining that the created ontology is a comprehensive representation of Xhosa Beadwork and is of the required standard. In addition, the XBO was documented as a human-understandable and readable resource and was published. The outcome of the study indicates that the XBO is an adequate, shareable and reusable semantic artifact that can indeed support the formalization and preservation of IK in the domain of Xhosa Beadwork. A small illustrative ontology sketch follows this record.
- Full Text:
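The XBO itself is an OWL 2 DL artifact built with the NeOn methodology; the published ontology is not reproduced here. The snippet below is only a tiny illustrative sketch, using the owlready2 Python library, of how the personal-ornamentation classes named in the abstract might be modelled; the IRI, the hasBead property and the individual name are hypothetical and are not part of the actual XBO.

```python
from owlready2 import get_ontology, Thing, ObjectProperty

# Hypothetical IRI; the real XBO's namespace is not assumed here.
onto = get_ontology("http://example.org/xbo.owl")

with onto:
    class Beadwork(Thing): pass
    class Necklace(Beadwork): pass
    class Headband(Beadwork): pass
    class Armlet(Beadwork): pass
    class Waistband(Beadwork): pass
    class Bracelet(Beadwork): pass
    class Anklet(Beadwork): pass

    class Bead(Thing): pass
    class hasBead(ObjectProperty):   # illustrative object property
        domain = [Beadwork]
        range = [Bead]

# Create an individual and serialise the ontology to a file.
necklace_example = onto.Necklace("necklace_example")
onto.save(file="xbo_sketch.owl")
```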
An analysis on the use of web-based ontology to support ubiquitous learning in South African secondary schools
- Authors: Bamigboye, Oluwatosin Omotoyosi
- Date: 2018
- Subjects: Internet in education World Wide Web Ontology
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10353/12859 , vital:39385
- Description: There is certainly a need to upgrade our educational system, and this can be done through technology-enhanced learning. Technology-enhanced learning can be achieved by developing a web-based ontology e-learning platform, which allows learning to take place ubiquitously. In achieving this task, this research focuses on analyzing the use of web-based ontology to support the design and implementation of a ubiquitous learning system in South Africa. The implemented web-based ontology e-learning system was deployed and tested. The system testing was done on two variables (information retrieval and scalability) of the localhost and client system, with the following testing metrics: time taken for information retrieval, request time to process the request, transfer rate, time the localhost receives the request, time to respond, round-trip time for the request and network usage. The metrics testing was achieved using the Apache benchmarking console and the gnuplot application to generate the captured data and performance graphs, while Wireshark was used to test and analyze round-trip time and network usage through the deployed system. The findings of this study show that the relationship between student and learning content becomes explicit when using ontology technology in searching, organizing, gathering and developing content. Results obtained on information retrieval show that the transfer rate of information on the localhost for 100 requests at a concurrency level of 5 is 37169.89 kb/s, while on the client's system the transfer rate was 48494.36 kb/s, which was due to multiple requests on the client's side. Results obtained on scalability show the round-trip time, computed as (time to respond − time to request). The longest round trip was 8 seconds, a result of the network being congested with multiple packet requests from various sources trying to access the localhost at the same time, while the fastest was 1 second. The implications of these results are that a web-based ontology e-learning system makes a positive contribution to teaching and learning processes in schools in terms of content retrieval and network usage. The system furthermore shows the relationship adopted by learners and teachers, and the matched needs that arise between them. The contribution of this study adds to the existing discoveries on the use of web-based and knowledge-based ontology. A small sketch of the round-trip-time and transfer-rate calculation follows this record.
- Full Text:
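The study computes round-trip time as (time to respond − time to request) and reports transfer rates from the Apache benchmarking console. The snippet below is a small, assumed Python illustration of those two calculations from per-request timestamps; the RequestLog fields and the example values are hypothetical, not the study's measurements.

```python
from dataclasses import dataclass

@dataclass
class RequestLog:
    # Hypothetical fields mirroring the metrics described above (times in seconds).
    time_request_sent: float
    time_response_received: float
    bytes_transferred: int

def round_trip_time(log: RequestLog) -> float:
    # RTT = time to respond - time to request, as defined in the study.
    return log.time_response_received - log.time_request_sent

def transfer_rate_kb_s(log: RequestLog) -> float:
    rtt = round_trip_time(log)
    return (log.bytes_transferred / 1024) / rtt if rtt > 0 else float("inf")

logs = [RequestLog(0.000, 0.045, 51200), RequestLog(0.010, 0.120, 204800)]
for entry in logs:
    print(f"RTT={round_trip_time(entry):.3f}s, rate={transfer_rate_kb_s(entry):.1f} kB/s")
```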
A mobile based user centred integrated remote patient monitoring framework for low resource settings
- Authors: Ndlovu, Nkanyiso
- Date: 2017
- Subjects: Health services accessibility Medical telematics Patient monitoring -- Remote sensing
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10353/8563 , vital:33128
- Description: There is a gap in healthcare service delivery within low resource settings of South Africa. These areas are under-serviced because of poor health infrastructure and few available medical experts. This contributes immensely to poor health care delivery, especially to chronically ill diabetic patients, and increases mortality rates. In recent years, innovative remote patient monitoring (RPM) systems have been developed to curb the above challenge. Unfortunately, most of these systems are standalone and incompatible with one another. Most of them rely on Internet connectivity, which is limited in low resource settings. This makes continuity of care for chronically ill patients a great challenge. Additionally, the efficacy and feasibility of RPM using mobile phones in low resource settings of South Africa are still unknown. It was also noted that none of these systems have been developed for a clinical trial. The goal of this study was to provide a standard framework that allows the optimal design of mobile RPM systems which are interoperable. The objectives were to investigate the RPM system's efficacy and reliability in low resource settings and determine its effects on clinical management, self-care and health outcomes. The framework was validated with a clinical trial to remotely monitor diabetic adults in the Limpopo province of South Africa. A prototype system was developed, based on a sound user-centric design process and enterprise architectural principles, to remotely monitor elderly diabetic patients using cellular technologies and existing hospital infrastructure. It was evaluated using a controlled, randomized clinical trial over 6 months. There were 120 patients who took part in the study, categorized into two groups: the intervention Group X and the control Group Y. Each group comprised 60 participants. Evidence from this study justified the feasibility and possibility of long-term implementation of the RPM system to cater for chronically ill patients in low resource settings worldwide. Results showed that self-care and normal blood glucose levels improved for both groups, whereas quality of life improved only for Group X. It was shown that extensive self-care knowledge, with the help of the RPM system, improved self-care and helped normalize glucose levels. Hospital admissions and mortality between the two groups did not differ much. However, the intervention group had more hospital visits than the control group because the participants were requested to visit the hospitals in case of emergency. Users perceived the RPM system as a feasible and effective way of clinical management and self-care. Due to its wide acceptance, some patients were even willing to continue using the system after the trial. Home measurements proved to be reliable and helped improve self-care. In future, a standardized and unified framework based on a rule set would provide comprehensive remote healthcare, allowing continuous patient monitoring at a reduced overall cost and thereby decreasing mortality rates.
- Full Text:
The investigation of the role and the efficacy of learning technologies towards community skill development
- Authors: Masikisiki, Baphumelele
- Date: 2017
- Subjects: Web-based instruction Computer-assisted instruction
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10353/5972 , vital:29460
- Description: Research has revealed that during the design and development of e-learning technologies there is a tendency to neglect the needs of end users and to focus on the design process and the technology factors; this has traditionally been the reason for poor usability of otherwise well conceptualized systems, and as a result a number of IT-based learning tools end up not being usable and not being utilized effectively. This study aims to investigate the impact of e-learning technologies and how people perceive the usage of e-learning technologies towards community skill development. An evaluation of four different e-learning technologies was conducted to investigate the role and efficacy of e-learning technologies within the surrounding communities. Data was analyzed as nominal data using IBM Statistical Package for Social Sciences (SPSS) software 24. Descriptive analysis, frequency, reliability and correlational analysis, as well as measures of central tendency, were computed. Reliability was evaluated by assessing the internal consistency of the items using Cronbach's alpha. To analyze the relationship between variables, matrices of Pearson's correlation were used. A Pearson's correlation can only be accepted when the effect is significant (P>.05); this indicates that there is a positive or a negative relationship between two variables, and if these conditions are not met the proposed correlation or hypothesis can be rejected. Results indicate a poor perception and poor acceptance of e-learning technologies due to a number of factors, including lack of computer self-efficacy, which leads to computer anxiety, and the affordability of internet connectivity, which leads to inaccessibility of e-learning technologies. The findings also indicated that LAMS was found to be less usable and less useful by a number of students. However, students who enjoy working in groups found LAMS usable because it supported their preferred learning style, whereas individualistic students preferred Moodle and Dokeos because they supported their personal preferences and assessment styles. Having understood all the characteristics of learning tools, relevant learning technologies that are suitable for students can then be recommended. A brief sketch of the reliability and correlation calculations follows this record.
- Full Text:
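The analysis above relies on Cronbach's alpha for internal consistency and Pearson's correlation for relationships between variables (computed in SPSS in the study). As a rough illustration of the same statistics, the Python sketch below computes both on synthetic Likert-scale responses; the data, item count and variable names are invented for the example.

```python
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questionnaire-items matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(40, 5)).astype(float)   # 40 respondents, 5 Likert items
print("Cronbach's alpha:", round(cronbach_alpha(responses), 3))

# Pearson's correlation between two hypothetical variables (e.g. usefulness vs usability).
usefulness = responses[:, 0]
usability = responses[:, 1]
r, p = pearsonr(usefulness, usability)
print(f"Pearson r={r:.3f}, p={p:.3f}")   # accept the correlation only if the effect is significant
```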
Using data mining techniques for the prediction of student dropouts from university science programs
- Authors: Vambe, William Tichaona
- Date: 2016
- Subjects: Data mining Dropout behavior, Prediction of
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10353/12314 , vital:39252
- Description: Data mining has taken center stage in education for addressing student dropout, which has become one of the major threats affecting Higher Educational Institutes (HEIs). Being able to predict students who are likely to drop out helps the university to assist those facing challenges early. This results in producing more graduates with the intellectual capital to provide skills to industry, hence addressing the major challenge of skills shortage being faced in South Africa. Studies reported in the literature have addressed this dropout challenge using the theoretical approach, which banked on Tinto's model, followed by the traditional and statistical approaches. However, these lacked accuracy and automation, which makes them difficult and time-consuming to use as they must be tested periodically to be validated. Recently, data mining has become a vital tool for predicting non-linear phenomena, including where there is missing data, and for bringing accuracy and automation. The usefulness and reliability of data mining in education made it possible for different researchers to use it for prediction. As such, this research used a data mining approach that integrates classification and prediction techniques to analyze student academic data at the University of Fort Hare and to create a model for student dropout using pre-entry data and the university academic performance of each student. Following the Knowledge Discovery from Databases (KDD) framework, data for the students enrolled in Bachelor of Science programs between 2003 and 2014 was selected. It went through preprocessing and transformation to deal with missing data and noisy data. Classification algorithms were then used for student characterization. Decision trees (J48), found in the Weka software, were used to build the model for data mining and prediction. The reasons for choosing decision trees were their ability to deal with textual, nominal and numeric data, as was the case with our input data, and their good precision. The model was then trained using a training data set, and validated and evaluated with another data set. Experimental results demonstrate that data mining is useful in predicting students who are likely to drop out. A critical analysis of correctly classified instances, the confusion matrix and the ROC area shows that the model can correctly classify and predict those who are likely to drop out. The model accuracy was 66 percent, which is a good percentage as supported in the literature, meaning the results produced can reliably be used for assessment and strategic decisions. Furthermore, the model took a matter of seconds to compute results when supplied with 400 instances, which proves that it is effective and efficient. Grounding our conclusion in these experimental results, this research showed that data mining is useful for bringing automation and accuracy to the prediction of student dropouts, and that the results can reliably be depended on for decision making by faculty managers, who are the decision makers. A minimal decision-tree sketch follows this record.
- Full Text:
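The study builds its dropout model with J48 decision trees in Weka. The sketch below is only an assumed, simplified Python analogue using scikit-learn's entropy-based DecisionTreeClassifier on synthetic pre-entry and performance features, reporting the same kinds of measures mentioned above (accuracy, confusion matrix, ROC area); the features, labels and the 400-instance figure are illustrative stand-ins, not the University of Fort Hare data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

rng = np.random.default_rng(42)
n = 400
# Hypothetical pre-entry and first-year performance features.
matric_points = rng.integers(20, 48, size=n)
first_year_avg = rng.normal(55, 15, size=n).clip(0, 100)
attendance = rng.uniform(0.3, 1.0, size=n)
X = np.column_stack([matric_points, first_year_avg, attendance])
# Synthetic label: weaker performance and attendance raise the dropout probability.
p_drop = 1 / (1 + np.exp(0.08 * (first_year_avg - 50) + 3 * (attendance - 0.6)))
y = (rng.uniform(size=n) < p_drop).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=0)  # J48-like tree
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("confusion matrix:\n", confusion_matrix(y_te, pred))
print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```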
Investigation of the NFC technology for mobile payments and the development of a prototype payment application in the context of marginalized rural areas
- Authors: Gurajena, Caroline
- Date: 2014
- Subjects: Mobile commerce -- South Africa Mobile communication systems -- South Africa
- Language: English
- Type: Thesis , Doctoral , DPhil
- Identifier: http://hdl.handle.net/10353/14071 , vital:39802
- Description: The Internet of Things (IoT) environment involves the interaction of numerous ‘things’. These ‘things’ are embedded with different kinds of technologies such as RFID technology, NFC technology and sensors. This causes IoT to bring into play many security risks apart from the ones that already exist in the current Internet; for example, embedded RFID tags can be easily triggered to send their content, which could be private information. IoT also introduces internal attacks, such as the on-off attack, bad mouthing and whitewashing attacks, into networks that already exist. These internal attacks cannot be solved by hard security mechanisms such as cryptographic algorithms and firewalls because they guarantee total trust. This eliminates uncertainty, which should always be present where trust exists; that is, hard security mechanisms enable IoT ‘things’ either to trust another ‘thing’ completely or not at all, and this makes them unsuitable for the IoT environment. When objects in any network are communicating, there is some element of uncertainty. Also, hard security mechanisms such as public key cryptography cause communication overhead in the already resource-constrained IoT devices, and these conventional cryptography methods cannot deal with internal attacks. This brings about the need for a middleware that includes functions to manage the trust, privacy and security issues of all data exchange, communications and network connections. For IoT to be successful, the provision of trust, security and privacy measures is essential. Trust management may enhance the adoption of and security measures in IoT. Trust helps in identifying trustworthy ‘things’ in the network and gives ‘things’ in the network the ability to reason about all aspects concerning trust in the environment. Trust can be administered through a trust management model. This research notes that most of the trust models that have been proposed fail to address scalability challenges and lack suitable computation methods. It is on that premise that this research focuses on developing a suitable trust model for the IoT environment. The research also introduces new ways of creating relationships in IoT, enabling the creation of new cooperation opportunities in the environment. Overall, this research aimed to design and develop a generic trust and authority delegation model for the heterogeneous IoT environment that is scalable and generalized to cater for the heterogeneous IoT environment. This research was conducted in three phases. The first phase reviewed the literature in order to identify outstanding issues in IoT trust management and to identify a suitable computational method. This provided a critical analysis of different computational methods, highlighting their advantages and limitations. In the second phase of the research, the proposed trust model was designed and tested. In the last phase, the feasibility of the proposed model was evaluated. The proposed model is based on fuzzy logic. Fuzzy logic was selected for trust computation because it is able to imitate the process of the human mind through the use of linguistic variables and can handle uncertainty. The proposed model was tested in a simulated environment. The simulation results showed that the proposed model can identify selfish and malicious entities effectively. The results also showed that the model was able to deal with different types of behaviours of entities. The testing proved that the proposed trust model can support decision making in IoT based on trust. The results from the evaluation show that this research ameliorates the design and development of trust management solutions for the IoT environment. A small fuzzy-inference sketch follows this record.
- Full Text:
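The thesis computes trust with fuzzy logic so that uncertainty about a ‘thing’ can be expressed through linguistic variables. The snippet below is a minimal, hypothetical sketch of that idea: triangular membership functions fuzzify two assumed inputs (a direct interaction success rate and a recommendation score) and a zero-order Sugeno-style rule base yields a trust value; the membership ranges, rules and thresholds are invented and are not the model proposed in the thesis.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trust_score(success_rate: float, recommendation: float) -> float:
    """Tiny zero-order Sugeno-style inference: two inputs in [0, 1] -> trust in [0, 1]."""
    # Fuzzify the inputs into 'low' and 'high' linguistic terms.
    low_sr, high_sr = tri(success_rate, -0.5, 0.0, 0.6), tri(success_rate, 0.4, 1.0, 1.5)
    low_rec, high_rec = tri(recommendation, -0.5, 0.0, 0.6), tri(recommendation, 0.4, 1.0, 1.5)

    # Rules: (firing strength, crisp output level of the consequent).
    rules = [
        (min(high_sr, high_rec), 0.9),                               # both good -> trustworthy
        (max(min(high_sr, low_rec), min(low_sr, high_rec)), 0.5),    # mixed -> uncertain
        (min(low_sr, low_rec), 0.1),                                 # both poor -> untrustworthy
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5     # default to neutral trust when no rule fires

# An on-off attacker behaves well, then badly: its trust drops once interactions sour.
print(trust_score(0.95, 0.9))   # ~0.9 -> trusted
print(trust_score(0.2, 0.3))    # ~0.1 -> flagged as untrustworthy
```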
Modelling false positive reduction in maritime object detection
- Authors: Nkele, Nosiphiwo
- Date: 20xx
- Subjects: Computer vision Neural networks (Computer science)
- Language: English
- Type: Thesis , Masters , MSc (Computer Science)
- Identifier: http://hdl.handle.net/10353/17168 , vital:40862
- Description: Target detection has become a very significant research area in computer vision, with applications in the military, maritime surveillance, and defense and security. Maritime target detection during critical sea conditions produces a number of false positives when using existing algorithms, due to sea waves, the dynamic nature of the ocean, camera motion, sea glint, sensor noise, sea spray, swell and the presence of birds. The main question addressed in this research is how object detection can be improved in the maritime environment by reducing false positives and improving the detection rate. Most previous work on object detection still fails to address the problem of false positives and false negatives due to background clutter. Many researchers have tried to reduce false positives by applying filters, but filtering degrades the quality of an image, leading to more false alarms during detection. As much as radar technology has previously been the most utilized method, it still fails to detect very small objects and may only be applied in special circumstances. In trying to improve the implementation of target detection in the maritime domain, an empirical research method was adopted to answer questions about existing target detection algorithms and techniques used to reduce false positives in object detection. A pre-trained Faster R-CNN with Inception v2 was retrained on visible images. The pre-trained model was retrained on five different data samples of increasing size; for the last two samples the data was duplicated to increase the size. For testing purposes, 20 test images were used to evaluate all the models. The results of this study showed that the deep learning method used performed best in detecting maritime vessels, that increasing the dataset size improved detection performance, and that false positives were reduced. The duplication of images did not yield the best results; however, the results were promising for the first three models with increasing data. A short detection-and-thresholding sketch follows this record.
- Full Text:
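The experiments retrain a pre-trained Faster R-CNN (Inception v2) and evaluate how false positives change with training-set size. As a loose illustration of confidence-based false-positive suppression at inference time, the sketch below uses torchvision's Faster R-CNN (ResNet-50 FPN) as a stand-in for the TensorFlow model used in the thesis; the score threshold, the image path and the use of torchvision rather than the original detector are assumptions.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Stand-in detector: torchvision's Faster R-CNN instead of the TF Inception-v2 model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

SCORE_THRESHOLD = 0.7   # raising this suppresses low-confidence (likely false-positive) boxes

def detect_objects(image_path: str):
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]          # dict with 'boxes', 'labels', 'scores'
    keep = output["scores"] >= SCORE_THRESHOLD
    return output["boxes"][keep], output["labels"][keep], output["scores"][keep]

boxes, labels, scores = detect_objects("maritime_scene.jpg")   # hypothetical test image
print(f"{len(boxes)} detections kept above the {SCORE_THRESHOLD} confidence threshold")
```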