Synthesis, X-Ray characterization, spectroscopic and Hirshfeld surface analysis of dimeric metal centers featuring phenacyl-esters
- Authors: Qomfo, Vuyiseka
- Date: 2024-12
- Subjects: Spectrum analysis , Spectroscopic imaging , Diagnostic imaging -- Digital techniques
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/69426 , vital:77254
- Description: In this study, the synthesis and characterization of carboxylate paddlewheel copper complexes were investigated and reported. The complexes consist of O- and N-donor ligands coordinated in the apical positions of the copper(II) paddlewheel complexes. The primary focus was the influence of the incoming substituents on the structure, particularly with regard to the spectral and thermal properties of the synthesized compounds. The synthesized complexes ranged from simple mononuclear complexes and dinuclear dimers to supramolecular 1D networks and a tetranuclear copper(II) compound. The complexes were characterized using analytical and spectroscopic techniques, including single-crystal diffraction studies, FT-IR spectroscopy, thermal analysis and Hirshfeld surface analysis. Structural analysis of the mononuclear complex obtained in the reaction of Cu2(o-CH3-PhCO2)4(THF)2 with the ligand 2-oxo-phenylethylnicotinate revealed a square-planar geometry. The series of dinuclear paddlewheel complexes obtained with ligands (L = THF (1), C4H8O (2), C14H11NO3 (3)) revealed a square-pyramidal geometry, with the methyl-substituted phenyl carboxylate groups bridging the two copper atoms in the syn-syn coordination mode. Extended supramolecular complexes were synthesized via the reaction of three synthesized structurally bifunctional organic ligands with the tetrakis(μ-carboxylato-O,O)dicopper(II) core. Two of the six reactions successfully formed paddlewheel cage-type structures, resulting in dinuclear paddlewheel complexes with four carboxylate ligands occupying the equatorial positions and the bifunctional ligands coordinating in the apical positions. Four of the nine reactions produced mononuclear copper complexes. Due to the inconsistent power supply caused by load-shedding, the remaining three crystals synthesized could not be confirmed by single-crystal diffraction before the submission of this thesis. 
, Thesis (MSc) -- Faculty of Science, School of Biomolecular & Chemical Sciences, 2024
- Full Text:
- Date Issued: 2024-12
The inhibitory effects of cannabinoids from Cannabis sativa on the enzymes dipeptidyl peptidase-IV, sucrase and maltase as a new therapeutic treatment for type 2 diabetes
- Authors: Viljoen, Zenobia
- Date: 2024-12
- Subjects: Diabetes -- Treatment , Cannabinoids -- Therapeutic use , Medical Marijuana -- therapeutic use
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/69516 , vital:77264
- Description: Type 2 diabetes is one of the most prevalent diseases worldwide. The treatments used to manage diabetes often have severe side effects, and patients develop resistance to traditional treatment. The project aimed to test whether phytocannabinoids from Cannabis sativa inhibit key enzymes involved in glycaemic homeostatic regulation, namely dipeptidyl peptidase 4 (DPP-4), sucrase, and maltase. This study investigated the inhibitory effects of 3 µM–128 µM cannabidiol (CBD), cannabinol (CBN), cannabigerol (CBG), and Δ9-tetrahydrocannabinol (THC). CD spectroscopy was used to investigate the changes in the secondary structure of DPP-4 with interacting inhibitors. The effects of 1.25, 2.5, and 5 mg/kg rat THC cannabis extract on the activity of DPP-4 in blood plasma and pancreatic tissue, and on glucagon concentration in blood plasma, were investigated in the diabetic rat model and the obese rat model. The carbohydrate digestive enzymes α-amylase, α-glucosidase and maltase were not inhibited by any of the cannabinoids. CBN had inhibitory effects on sucrase. CBN, CBG, and CBD are mixed inhibitors of DPP-4; thus they can inhibit DPP-4 competitively and uncompetitively depending on the concentration of the cannabinoid. THC was shown in kinetic and rat model studies to be a very weak inhibitor of DPP-4. CD spectroscopy showed that sitagliptin (an FDA-approved drug and competitive inhibitor) and CBG mimic the denatured structure of DPP-4, whereas CBD, CBN and THC mimic the free (active) form of DPP-4. A reduction in pancreatic DPP-4 activity was observed with 2.5 and 5 mg/kg rat THC (diabetic model). This study showed that diet plays a role in glycaemic dysregulation (obese rat model) and that insulin-resistant rats had four times higher glucagon levels compared to the lean control (diabetic model). 
1.25 mg/kg rat THC reduced blood plasma DPP-4 activity and blood plasma glucagon. Cannabis sativa can be a feasible treatment to help manage type 2 diabetes by inhibiting DPP-4, especially medical strains of Cannabis sativa with high concentrations of CBD and CBG. , Thesis (MSc) -- Faculty of Science, School of Biomolecular & Chemical Sciences, 2024
- Full Text:
- Date Issued: 2024-12
The optimisation of Eucalyptus regeneration practices for improved survival, growth and uniformity in South African pulpwood plantations
- Authors: Hechter, Ullrich
- Date: 2024-12
- Subjects: Eucalyptus -- Regeneration -- South Africa , Forests and forestry -- Economic aspects , Forests and forestry
- Language: English
- Type: Doctoral theses , text
- Identifier: http://hdl.handle.net/10948/68862 , vital:77135
- Description: Commercial forestry plantations in South Africa play an important role in the economy of the country, contributing 1.2% towards the gross domestic product. Currently, plantation forests occupy 1.1% (1.2 million hectares) of the South African land surface, of which 75 000 hectares are re-established each year. Eucalypts constitute 43% of planted area, of which 88% is grown for pulpwood. Achieving high tree survival (>90%) is important in terms of optimising rotation-end yield. The industry origin of a 90% survival benchmark is unclear, although company procedures incorporate this as the minimum threshold in terms of re-establishment success. Past research indicates that most mortality occurs within a narrow period post-establishment and is often associated with substandard re-establishment practices and/or a stressed micro-environment. An improved understanding is needed of the various mitigation measures required to minimise mortality during eucalypt re-establishment. Before making decisions related to mortality mitigation measures, comprehensive data are required as to their commercial applicability, as well as outcomes from multiple trials that accurately quantify any impacts on tree survival and financial return. The overall purpose of this dissertation was the optimisation of Eucalyptus re-establishment practices for improved survival, growth and uniformity in South African pulpwood plantations. To achieve this, five inter-linked objectives were determined. The first objective was to highlight the most important factors contributing to increased mortality in eucalypt plantations during re-establishment. This was achieved through a literature review. Citations were ranked in terms of credibility, with the importance ratings (derived from the literature sources) applied to the different factors affecting survival and growth during eucalypt re-establishment. 
Of the various factors impacting early eucalypt mortality, water stress and planting stock quality were considered highly important. The manner and quality of site preparation (soil and slash), planting practices (including planting depth), timing of planting (during dry, hot periods), various post-planting operations (incorrect fertiliser placement or herbicide drift) and insect pests and diseases also contribute to mortality, but to a lesser extent. These factors cannot be considered in isolation due to the complex interactions that exist between them, and determining the primary causes of mortality can be elusive, especially as their impacts tend to be additive by nature. The second objective was to link survival to silvicultural treatments, site-related physiographic factors and climatic variables in South Africa. This was achieved by conducting an integrated analysis of 43 Eucalyptus trials. Of the seven re-establishment practices considered, watering, planting depth and fertiliser application were significant, while plant size, pitting method, residue management and insecticide application were not. However, when environmental variables were included within the analyses, there were significant site × treatment interactions for planting depth, plant size, residue management and fertiliser application. This highlights the importance of taking site-related factors into consideration when interpreting the causes of mortality. The third objective was to determine the interactive effects of planting density and mortality on Eucalyptus growth, uniformity and financial yield at rotation-end in South Africa. This was carried out to verify whether planting at different densities may be used as a preventative (before planting) mitigation measure. 
One trial was used to answer four key sub-objectives: 1) The impact of three planting densities (1 102, 1 500, 1 959 SPH) with no mortality on yield at rotation-end; 2) The impact of mortality (0%, 10%, 20%, 30%, 40%) on rotation-end yield; 3) The quantification of tree performance when planting at a higher density and accepting a certain degree of mortality; and 4) The financial impact of different planting densities and mortality on rotation-end profit. Higher planting densities resulted in smaller individual trees, but with an increase in stand-level performance. At rotation-end, the lower mortality treatments (0% and 10%) had significantly higher volumes ha⁻¹ than the higher mortality treatments (30% and 40%). Planting at higher densities (1 722 and 1 959 SPH) and accepting a certain degree of mortality resulted in non-significant differences for volume at rotation-end compared to the fully stocked 1 500 SPH treatment. A higher SPH resulted in a higher yield, but with an increase in estimated establishment/tending and harvesting costs. In contrast, an increase in mortality and/or lower SPH (in the absence of mortality) resulted in more variable stand growth, together with an increase in estimated machine harvesting productivity and reduced costs. Irrespective of SPH, the higher the mortality the greater the loss of income, with the best profit within each treatment related to full stocking (0% mortality). Within the higher planting densities, the profit gained following low mortality (10 and 20%) was similar to that of no mortality (0%), indicating that higher mortality may be tolerated when planting at higher densities, confirming the 90% survival threshold the industry aims to achieve post-establishment. The fourth objective was to determine whether silvicultural intervention (blanking at 1, 2 and 3 months, or coppicing and interplanting at 6 months) will result in acceptable eucalypt stocking if mortality is higher than 10% (remedial mitigation measure). 
Data from a re-establishment trial were analysed to determine which of the mitigation measures performed best in terms of stocking and growth. Coppicing and interplanting with larger plants was not a viable mitigation measure for mortality, as most of the coppice shoots had died, possibly as a result of frost. Although high re-establishment costs may be incurred, disaster clearing to waste followed by replanting is an option if mortality is unacceptably high (as opposed to leaving the stand as is). The results of this objective confirm that blanking as the current Best Operating Practice is still appropriate in South African forestry (i.e., try to have survival >90% and blank as soon as possible to retain >90% of stems). Blanked plants do contribute to volume, but for this to occur, blanking should be carried out within 4 weeks after planting to gain maximum benefit. This also highlights the importance of implementing remedial mitigation measures promptly to achieve >90% survival. Using the outcomes from objectives 1-4, the fifth objective focussed on the development of a decision support system (DSS) for the implementation of mitigation measures to improve survival within commercial eucalypt pulpwood plantations in South Africa. Improved survival starts with the implementation of good re-establishment practices and good quality planting stock. Mitigation measures for poor survival can be implemented either prior to re-establishment (before mortality occurs) or post re-establishment (after mortality has occurred). If poor survival still occurs after the implementation of good silviculture practices and pre-re-establishment mitigation practices (planting at higher densities), one should consider the various options available in terms of post re-establishment mitigation practices (remedial practices), such as blanking, or replanting if mortality is high. 
Overall, the outcomes from this dissertation provide benchmark data and derived information as to the necessity for various mortality mitigation options within the commercial forestry sector in South Africa. In addition, the DSS will assist with decision making in terms of implementing the best silviculture practices and mitigation measures for improved survival during eucalypt re-establishment in South African pulpwood plantations. , Thesis (PhD) -- Faculty of Science, School of Natural Resource Science & Management, 2024
- Full Text:
- Date Issued: 2024-12
Toughened wood plastic composites for low technology and advanced manufacturing applications
- Authors: Mabutho, Briswell
- Date: 2024-12
- Subjects: Plastic-impregnated wood , Polymeric composites
- Language: English
- Type: Doctoral theses , text
- Identifier: http://hdl.handle.net/10948/69360 , vital:77225
- Description: The utilization of wood plastic composites (WPCs) has increasingly emerged as an appealing alternative for products where traditional wood and conventional composites would typically be used. This is primarily due to their cost-effectiveness, mouldability, recyclability, renewability, and potential biodegradability. However, the incorporation of wood flour (WF) in thermoplastics to produce WPCs presents several challenges, two of which are addressed in the current study: WF-thermoplastic matrix adhesion, and the resulting brittleness of the WPC. The hydrophilic WF filler and the hydrophobic polypropylene matrix typically mix poorly due to their differing surface energies. Consequently, the current research focuses on enhancing WF-matrix (i.e. polypropylene, PP) adhesion and dispersion through compatibilization using maleic anhydride grafted polypropylene (MAPP). Additionally, the brittleness of WPC, exacerbated by the WF content, is addressed through the incorporation of crumb rubber (CR), a process commonly referred to as "toughening" the WPC. Prior to the use of CR in WPCs, the CR amount and its compatibility within the PP matrix were optimized to establish a toughening system that would achieve the highest impact strength without significantly affecting the tensile strength. The CR was compatibilized by employing dynamic vulcanization of varying amounts of ethylene propylene diene monomer rubber (EPDM) in the CR/PP blends, using both sulphur and dicumyl peroxide cure systems. The results indicated that the sulphur dynamic cure system exhibited higher crosslinking efficiency, as reflected by the highest impact strength. Furthermore, to enhance WPC processability and adhesion, WF alkalization was conducted following a central composite design to optimize treatment temperature, time, and alkali concentration. 
This optimization resulted in improved WPC processability and mechanical properties at mild alkalization conditions. Subsequently, the optimum CR/EPDM dynamic cure system was employed to toughen both untreated and alkalized WPCs, resulting in toughened WPCs with improved thermal stability, impact strength, and elongation at break, while the tensile strength was only slightly compromised. , Thesis (PhD) -- Faculty of Science, School of Biomolecular & Chemical Sciences, 2024
- Full Text:
- Date Issued: 2024-12
- Authors: Mabutho, Briswell
- Date: 2024-12
- Subjects: Plastic-impregnated wood , Polymeric composites
- Language: English
- Type: Doctoral theses , text
- Identifier: http://hdl.handle.net/10948/69360 , vital:77225
- Description: The utilization of wood plastic composites (WPCs) has increasingly emerged as an appealing alternative for products where traditional wood and conventional composites would typically be used. This is primarily due to their cost-effectiveness, mouldability, recyclability, renewability, and potential biodegradability. However, the incorporation of wood flour (WF) in thermoplastics to produce WPCs presents several challenges, two of which are addressed in the current study: the WF-thermoplastic matrix adhesion, and the resulting brittleness of the WPC. The hydrophilic nature of WF filler and the hydrophobic polypropylene matrix, which typically lead to poor mixing due to their differing surface energies. Consequently, the current research focuses on enhancing WF-matrix (i.e. polypropylene, PP) adhesion and dispersion through compatibilization using maleic anhydride grafted polypropylene (MAPP). Additionally, the brittleness of WPC, exacerbated by the WF content, is addressed through the incorporation of crumb rubber (CR), a process commonly referred to as "toughening" the WPC. Prior to the use of CR in WPCs, optimization of the CR amount and compatibility within the PP-matrix were conducted to establish a toughening system that would achieve the highest impact strength without significantly affecting the tensile strength. The CR was compatibilized by employing dynamic vulcanization of varying amounts of ethylene propylene diene monomer rubber (EPDM) in the CR/PP blends using both sulphur and dicumyl peroxide cure systems. The results indicated that the sulphur dynamic cure system exhibited higher crosslinking efficiency, as reflected by the highest impact strength. Furthermore, to enhance WPC processability and adhesion, WF alkalization was conducted following a central composite design to optimize treatment temperature, time, and alkali concentration. 
This optimization resulted in improved WPC processability and mechanical properties at mild alkalization conditions. Subsequently, the optimum CR/EPDM dynamic cure system was employed to toughen both untreated and alkalized WPCs, resulting in toughened WPCs with improved thermal stability, impact strength, and elongation at break, while the tensile strength was only slightly compromised. , Thesis (PhD) -- Faculty of Science, School of Biomolecular & Chemical Sciences, 2024
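The alkalization step above was optimized with a central composite design (CCD) over treatment temperature, time, and alkali concentration. As an illustrative aside only, the coded run list for such a three-factor CCD can be enumerated as follows; the factor names, rotatable alpha value, and number of centre points are textbook assumptions, not the study's actual experimental settings:

```python
from itertools import product

def central_composite(factors, alpha=1.682, n_center=6):
    """Enumerate coded runs for a circumscribed central composite design.

    factors: list of factor names. Returns a list of dicts mapping each
    factor to its coded level. alpha=1.682 is the rotatable value for k=3.
    """
    k = len(factors)
    runs = []
    # 2^k factorial (cube) points at coded levels -1/+1
    for levels in product((-1, 1), repeat=k):
        runs.append(dict(zip(factors, levels)))
    # 2k axial (star) points at +/- alpha, varying one factor at a time
    for name in factors:
        for a in (-alpha, alpha):
            point = {f: 0 for f in factors}
            point[name] = a
            runs.append(point)
    # replicated centre points for pure-error estimation
    for _ in range(n_center):
        runs.append({f: 0 for f in factors})
    return runs

runs = central_composite(["temperature", "time", "NaOH_conc"])
print(len(runs))  # 8 cube + 6 axial + 6 centre = 20 runs
```

Each coded run would then be mapped back to real units (e.g. an actual temperature range) before being executed in the lab.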
- Full Text:
- Date Issued: 2024-12
A comparison of implementation platforms for the visualisation of animal family trees
- Authors: Kanotangudza, Priviledge
- Date: 2024-04
- Subjects: Business intelligence -- Computer programs , Human-computer interaction , Computer science
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64105 , vital:73653
- Description: Genealogy is the study of family history. Family trees are used to show ancestry and visualise family history. Animal family trees are different from human family trees as animals have more offspring to represent in a family tree visualisation. Auctioneering organisations, such as Boere Korporasie Beperk (BKB), provide livestock auction catalogues containing pictures of the animal on sale, the animal’s family tree and its breeding and selection data. Modern-day farming has become data-driven and livestock farmers use various online devices and platforms to obtain information, such as real-time milk production and animal health monitoring, and to manage farming operations. This study investigated and compared two Business Intelligence (BI) platforms, namely Microsoft Power BI and Tableau (Salesforce), and the Python programming language, as used in the implementation of cattle family tree charts. Animal family tree visualisation requirements were identified by analysing data collected from 23 agriculture users and auction attendees who responded to an online questionnaire. The results of the online survey showed that agriculture users preferred an animal family tree that resembled a human one, which is not currently used in livestock auction catalogues. A conference paper was published based on the survey results. The Design Science Research Methodology (DSRM) was used to aid in creating animal family tree charts using Power BI, Tableau and Python. The author compared the visualisation tools against selected criteria, such as learnability, portability, interoperability and security. Usability evaluations using eye tracking were conducted with agriculture users in a usability lab to compare the artefacts developed using Power BI and Python. 
Tableau was discarded during the implementation process as it did not produce the required family tree visualisation. The Technology Acceptance Model (TAM) theory, which seeks to predict the acceptance and use of technology based on users' perception of its usefulness and ease of use, was used to guide the research study in evaluating the artefacts. According to TAM, the adoption of the proposed technology to solve the problem of a static animal family tree in livestock auction catalogues was dependent on the agriculture user’s beliefs. This was based on the belief that the technology would help them make better buying decisions at livestock auctions effortlessly. The other theory used in this study was Task Technology Fit (TTF), which was used mainly to create the task list for the usability test. The results showed that the author of this work and the agriculture users preferred the artefact produced by Power BI: the learning and development time was shorter, and the User Interface (UI) created was more intuitive. The findings of this study indicated that the present auction catalogue could be supplemented with interactive online animal family tree visualisations created using Power BI. This study recommended that livestock auctioneering organisations should, in addition to providing paper catalogues, provide farmers with an online platform to view the family trees of cattle on auction to enhance purchasing decisions. , Thesis (MCom) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
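The abstract does not specify how the Python artefact rendered its charts, but the ancestry traversal underlying any animal family tree can be sketched as follows; the recursive sire/dam structure and the example cattle names are purely illustrative assumptions, not the thesis's implementation:

```python
def render_pedigree(animal, pedigree, depth=0, max_depth=3):
    """Recursively collect an animal's ancestry as indented text lines.

    pedigree: dict mapping animal name -> (sire, dam); unknown parents
    are None. max_depth bounds the number of generations shown.
    """
    if animal is None or depth > max_depth:
        return []
    lines = ["    " * depth + animal]
    sire, dam = pedigree.get(animal, (None, None))
    lines += render_pedigree(sire, pedigree, depth + 1, max_depth)
    lines += render_pedigree(dam, pedigree, depth + 1, max_depth)
    return lines

# Hypothetical cattle pedigree: name -> (sire, dam)
pedigree = {
    "Calf A": ("Bull B", "Cow C"),
    "Bull B": ("Bull D", "Cow E"),
    "Cow C": (None, None),
}
print("\n".join(render_pedigree("Calf A", pedigree)))
```

In an interactive chart the same traversal would feed a plotting library instead of printing text; the recursion and the depth limit (animals can have very large pedigrees) are the reusable parts.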
- Full Text:
- Date Issued: 2024-04
A methodology for modernising legacy web applications
- Authors: Malgraff, Maxine
- Date: 2024-04
- Subjects: Management information systems , Information technology , Application software -- Development
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64148 , vital:73657
- Description: One problem faced in the Information Systems domain is that of poorly maintained, poorly documented, and/or unmanageable systems, known as Legacy Information Systems (LISs). As a result of the ever-changing web development landscape, web applications have also become susceptible to the challenges of keeping up with technological advances, and older applications are starting to display the characteristics of Legacy Web Applications (LWAs). As retaining business process support and meeting business requirements is often necessary, one method of recovering vital LWAs is to modernise them. System modernisation aims to recover business knowledge and provide an enhanced system that overcomes the problems that plague LISs. When planning to modernise an LWA, guidance and support are essential to ensure that the modernisation exercise is performed efficiently and effectively. Modernisation methodologies can provide this required guidance and support as they provide models, tools and techniques that serve as guiding principles for the modernisation process. Although many modernisation methodologies exist, very few offer a comprehensive approach to modernisation that provides guidelines for each modernisation phase, tools to assist in the modernisation and techniques that can be used throughout. Existing methodologies also do not cater for cases that involve both an LWA and migration to modernised web-specific environments. This research study aimed to investigate modernisation methodologies and identify which methodologies, or parts thereof, could be adapted for modernising LWAs. Existing methodologies were analysed and compared using the definition of a methodology, as well as other factors that improve the modernisation process. Modernisation case studies were reviewed to identify lessons learned from these studies so that these could be considered when planning an LWA modernisation. 
The ARTIST methodology was the most comprehensive modernisation methodology identified from those researched and was selected as the most appropriate methodology for modernising an LWA. ARTIST was modified into the mARTIST methodology to cater for web-based environments. mARTIST was used to modernise an existing LWA, called OldMax, at an automotive manufacturer, anonymously referred to as AutoCo, to determine its ability to support the modernisation of LWAs. Additional tools and evaluation methods were also investigated and used in place of those recommended by ARTIST, where deemed appropriate for the modernisation of OldMax. Limitations set by AutoCo on the hosting and technical environments for the modernised application also required ARTIST to be adapted to better suit the use case. The steps taken during this modernisation were documented and reported on to highlight the effectiveness of mARTIST and the tools used. The resulting modernised web application, ModMax, was evaluated to determine the success of the modernisation. The modernisation of OldMax to ModMax, using the mARTIST methodology, was found to be successful based on the criteria set by the ARTIST methodology. Based on this, mARTIST can successfully be used for the modernisation of LWAs. To support future modernisations, an evaluation method for determining technical feasibility was developed for LWAs, and alternative tools that could be used throughout modernisation exercises were recommended. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
- Full Text:
- Date Issued: 2024-04
A model for measuring and predicting stress for software developers using vital signs and activities
- Authors: Hibbers, Ilze
- Date: 2024-04
- Subjects: Machine learning , Neural networks (Computer science) , Computer software developers
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/63799 , vital:73614
- Description: Occupational stress is a well-recognised issue that affects individuals in various professions and industries. Reducing occupational stress has multiple benefits, such as improving employees' health and performance. This study proposes a model to measure and predict occupational stress using data collected in a real IT office environment. Different data sources, such as questionnaires, application software (RescueTime) and Fitbit smartwatches, were used for collecting heart rate (HR), facial emotions, computer interactions, and application usage. The results of the Demand Control Support and Effort and Reward questionnaires indicated that the participants experienced high social support and an average level of workload. Participants also reported their daily perceived stress and workload level using a 5-point scale. The perceived stress of the participants was overall neutral. No correlation was found between HR, interactions, fear, and meetings. K-means and Bernoulli algorithms were applied to the dataset and two well-separated clusters were formed. The centroids indicated that higher heart rates were grouped either with meetings or with a higher difference in the centre point values for interactions. Silhouette scores and 5-fold validation were used to measure the accuracy of the clusters. However, these clusters were unable to predict the daily reported stress levels. Calculations were done on the computer usage data to measure interaction speeds and time spent working, in meetings, or away from the computer. These calculations were used as input into a decision tree together with the reported daily stress levels. The results of the tree helped to identify which patterns lead to stressful days. The results indicated that days with high time pressure led to more reported stress. A new, more general tree was developed, which was able to predict 82 per cent of the daily stress reported. 
The main discovery of the research was that stress does not have a straightforward connection with computer interactions, facial emotions, or meetings. High interactions sometimes lead to stress and at other times do not. Predicting stress therefore involves finding patterns in how data from different sources interact with each other. Future work will revolve around validating the model in more office environments around South Africa. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
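As a hedged illustration of the kind of pipeline the abstract describes (aggregating computer-usage data into daily features, then feeding them to a decision rule), a minimal sketch might look as follows; the feature names, thresholds, and numbers are invented for illustration and are not the study's fitted decision tree:

```python
def daily_features(day):
    """Aggregate raw usage records into daily features.

    day: list of (kind, minutes, interactions) tuples, where kind is
    "working", "meeting", or "away". Field names are illustrative
    stand-ins for the study's RescueTime-derived data.
    """
    totals = {"working": 0, "meeting": 0, "away": 0}
    clicks = 0
    for kind, minutes, interactions in day:
        totals[kind] += minutes
        clicks += interactions
    active = totals["working"] + totals["meeting"]
    # interaction speed: interactions per active minute
    speed = clicks / active if active else 0.0
    return {"speed": speed, **totals}

def predict_stress(f, speed_cut=8.0, meeting_cut=180):
    """Toy decision rule: heavy meeting load, or fast interactions with
    few breaks, predicts a stressful day. Thresholds are hypothetical."""
    if f["meeting"] > meeting_cut:
        return "stressed"
    if f["speed"] > speed_cut and f["away"] < 30:
        return "stressed"
    return "not stressed"

day = [("working", 300, 2700), ("meeting", 120, 60), ("away", 45, 0)]
print(predict_stress(daily_features(day)))  # not stressed
```

In the study a decision tree learned such splits from the reported daily stress labels; the sketch only shows the shape of the feature-then-rule pipeline.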
- Full Text:
- Date Issued: 2024-04
A process for integrated fitness and menstrual cycle data visualisations
- Authors: Taljaard, Isabelle
- Date: 2024-04
- Subjects: Human-computer interaction , Personal information management , Medical informatics -- Standards
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64379 , vital:73689
- Description: The increase in female participation in sport has led to an increase in research reporting on the relationship between fitness and menstrual cycle (F&M) data. Fitness variables such as VO2 max and heart rate are influenced by menstrual hormones and change with the different phases of a cycle. People frequently track both their F&M data to understand their long-term activity and their body’s changes during the different cycle phases. Both these data sets are tracked and visualised separately to help people understand their data; however, little work has been done to visualise the relationship between the two data sets. A process that guides the creation of an integrated F&M visualisation does not exist. This research aimed to develop and adopt a process that could be used to successfully guide the creation of an integrated F&M visualisation. The study followed the Design Science Research Methodology (DSRM) to create a primary and secondary artefact – the process and instantiation thereof. The DSRM was applied in iterative cycles in which the process was developed and instantiations were created and evaluated by participants. To develop the process, existing data processing and visualisation processes were reviewed from the literature to assess their successes and shortcomings. The review of existing processes revealed what steps, and factors related to those steps, would need to be considered. The process review highlighted the importance of five process steps: planning, collection, access, integration, and visualisation. Once the conceptual process was designed, it was adapted for the goal of creating an integrated F&M data visualisation. Prior to implementation, the process was first tested in a pilot study to ensure its validity before involving participants in data collection. After the process pilot study, the final implementation of the process took place and participants were recruited. 
In the first step of the process, the different fitness data types that are influenced by the menstrual cycle, and vice versa, were identified through a literature review. In the second step, devices to be used for data collection were evaluated and tested through exploratory testing and review of user manuals available online. The third and fourth steps, access and integration, were informed by further exploratory testing and review of relevant literature. The fifth step, data visualisation, was guided by relevant studies, Hick’s law, and the Schema Theory. Two iterations of DSR were conducted in two phases. Phase 1 (P1) was the instantiation of the planning, collection, access, and processing steps. Participants wore smartwatches while going about their daily lives and working out, and tracked their menstrual cycle to collect data. P1 data was used to create several instantiations of the process. The second phase (P2) was the instantiation of the visualisation step. The final visualisations, resulting from the instantiations, were evaluated by participants in P2. The review notes were used to improve both the process and the final visualisations. Both P1 and P2 were repeated (iterated) twice. The recommended process can be used by anyone who wants to create an integrated F&M visualisation and was designed to be modular so that users could choose to follow the whole process or only specific steps. The findings of this research can provide guidance to users, developers and smartwatch manufacturers on people’s preferences for these integrated visualisations. It also provides guidance for those who wish to create their own visualisations without needing prior programming experience or knowledge, since easy-to-use online visualisation tools are recommended. The process instantiations will assist people, especially women, to better understand their menstrual cycle and how it affects their physical well-being. 
, Thesis (MCom) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
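The integration step at the heart of the process, attaching a cycle-phase label to each daily fitness record so the two data sets can be visualised together, can be sketched as follows; the phase boundaries are generic textbook values and the record fields are illustrative assumptions, not the process's prescribed schema:

```python
from datetime import date, timedelta

def cycle_phase(day, cycle_start, cycle_length=28):
    """Map a calendar date to a coarse menstrual-cycle phase.

    Phase boundaries (days 0-4 menstrual, 5-13 follicular,
    14-16 ovulatory, rest luteal) are illustrative textbook values.
    """
    offset = (day - cycle_start).days % cycle_length
    if offset < 5:
        return "menstrual"
    if offset < 14:
        return "follicular"
    if offset < 17:
        return "ovulatory"
    return "luteal"

def integrate(fitness, cycle_start):
    """Attach a phase label to each daily fitness record."""
    return [{**rec, "phase": cycle_phase(rec["date"], cycle_start)}
            for rec in fitness]

start = date(2024, 1, 1)
fitness = [{"date": start + timedelta(days=d), "resting_hr": 60 + d % 3}
           for d in range(0, 28, 7)]
for row in integrate(fitness, start):
    print(row["date"], row["phase"], row["resting_hr"])
```

Once each fitness record carries a phase label, any charting tool (including the easy-to-use online tools the process recommends) can plot fitness variables grouped or coloured by phase.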
- Full Text:
- Date Issued: 2024-04
A toolkit for successful workplace learning analytics at software vendors
- Authors: Whale, Alyssa Morgan
- Date: 2024-04
- Subjects: Computer-assisted instruction , Intelligent tutoring systems , Information visualisation
- Language: English
- Type: Doctoral theses , text
- Identifier: http://hdl.handle.net/10948/64448 , vital:73713
- Description: Software vendors commonly provide digital software training to their stakeholders and are therefore faced with the problem of an influx of data collected from these training/learning initiatives. Every second of every day, data is being collected on online learning activities and learner behaviour. Thus, online platforms are struggling to cope with the volumes of data that are collected, and companies are finding it difficult to analyse and manage this data in a way that can be beneficial to all stakeholders. The majority of studies investigating learning analytics have been conducted in educational settings. This research aimed to develop and evaluate a toolkit that can be used for successful Workplace Learning Analytics (WLA) at software vendors. The study followed the Design Science Research (DSR) methodology, which was applied in iterative cycles where various components of the toolkit were designed, developed, and evaluated by participants. The real-world context was a software vendor, ERPCo, which has been struggling to implement WLA successfully with its current Learning Experience Platform (LXP), as well as with its previous platform. Qualitative data was collected using document analysis of key company documents and Focus Group Discussions (FGDs) with employees from ERPCo to explore and confirm different topics and themes. These methods were used to iteratively analyse the As-Is and To-Be situations at ERPCo and to develop and evaluate the proposed WLA Toolkit. The data collected from the FGDs was analysed using the Qualitative Content Analysis (QCA) method. To develop the first component of the toolkit, the Organisation component, the organisational success factors that influence the success of WLA were identified using a Systematic Literature Review (SLR). 
These factors were discussed and validated in two exploratory FGDs held with employees from ERPCo, one with operational stakeholders and the other with strategic decision makers. The DeLone and McLean Information Systems (D&M IS) Success Model was used to undergird the research as a theory to guide the understanding of the factors influencing the success of WLA. Many of the factors identified in theory were found to be prevalent in the real-world context, with some additional ones being identified in the FGDs. The most frequent challenges highlighted by participants were related to visibility; readily available high-quality data; flexibility of reporting; complexity of reporting; and effective decision making and insights obtained. Many of these concern usability issues for both the system and the information, which map to System Quality and Information Quality in the D&M IS Success Model. The second and third components of the toolkit are the Technology and Applications component and the Information component, respectively. Therefore, architecture and data management challenges and requirements for these components were analysed. An appropriate WLA architecture was selected and then further customised for use at ERPCo. A third FGD was conducted with employees who had more technical roles in ERPCo. The purpose of this FGD was to provide input on the architecture, technologies and data management challenges and requirements. In the Technology and Applications component of the WLA Toolkit, factors influencing WLA success related to applications and visualisations were considered. An instantiation of this component was demonstrated in the fourth FGD, where learning data from the LXP at ERPCo was collected and a dashboard incorporating recommended visualisation techniques was developed as a proof of concept. In this FGD, participants gave feedback on both the dashboard and the toolkit. 
The artefact of this research is the WLA Toolkit that can be used by practitioners to guide the planning and implementation of WLA in large organisations that use LXP and WLA platforms. Researchers can use the WLA Toolkit to gain a deeper understanding of the required components and factors for successful WLA in software vendors. The research also contributes to the D&M IS Success Model theory in the information economy. In support of this PhD dissertation, the following paper has been published: Whale, A. & Scholtz, B. 2022. A Theoretical Classification of Organizational Success Factors for Workplace Learning Analytics. NEXTCOMP 2022. Mauritius. A draft manuscript for a journal paper was in progress at the time of submitting this thesis. , Thesis (PhD) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics , 2024
- Full Text:
- Date Issued: 2024-04
An in vitro evaluation of the anti-breast cancer activity of Nigella sativa extracts and its bioactive compound in combination with curcumin
- Authors: Botha, Susanna Gertruida
- Date: 2024-04
- Subjects: Herbs -- Therapeutic use , Radiation-protective agents , Breast -- Cancer -- Treatment
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/63639 , vital:73571
- Description: Breast cancer constitutes 23% of all cancers in South African females. Curcumin and Nigella sativa have anti-cancer, anti-metastatic and antioxidant properties and may be effective against breast cancer. This study focused on the effect of N. sativa extracts or thymoquinone, and curcumin, individually and in combination, on breast cancer cells. An MTT assay showed that curcumin reduced cell viability by 50% (IC50) at 18 ± 2.63 μg/mL and thymoquinone (TQ) at 5 ± 0.95 μg/mL against the MDA-MB-231 cells. The IC50 values for curcumin and TQ against the MCF-7 cells were 35 ± 6.98 μg/mL and 4 ± 0.96 μg/mL, respectively. The IC50 value for the NSBE was determined to be 350 ± 55 μg/mL. The IC50 value of the NSAE did not fall within the selected concentration range. Synergism was noted for combinations of NSBE with curcumin, and of TQ with curcumin, against both MCF-7 and MDA-MB-231 cells. Two synergistic combinations per treatment per cell line, as determined by combination index analysis, were chosen for further investigation. The combinations and individual treatments tested against the MCF-10A cells were not significant, except for the NSBE80:CURC20 combination. Curcumin had the most significant antioxidant activity; however, no link was noted between the antioxidant activity and the cytotoxicity of the combinations. The combination treatments induced apoptosis more effectively than the individual treatments. Caspase-3-dependent apoptosis was noted for the NSBE10:CURC90 and TQ80:CURC20 combinations against the MDA-MB-231 cells, and the TQ60:CURC40 combination against the MCF-7 cells. The individual and combined treatments effectively reduced MDA-MB-231 cell adhesion to fibronectin, but not all reduced cell adhesion to laminin. Based on these results, the combinations of curcumin with TQ or NSBE have promising anticancer benefits against breast cancer. , Thesis (MSc) -- Faculty of Science, School of Biomolecular & Chemical Sciences, 2024
- Full Text:
- Date Issued: 2024-04
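The combination index analysis mentioned in this abstract is presumably the Chou-Talalay approach, under which the index at a fixed effect level reduces to a one-line formula. A minimal sketch of that calculation, assuming the Chou-Talalay definition; the doses below are illustrative, not values from the thesis:

```python
def combination_index(d1, d2, dx1, dx2):
    """Chou-Talalay combination index at a given effect level.

    d1, d2: doses of drugs 1 and 2 used together to reach the effect.
    dx1, dx2: doses of each drug alone producing the same effect.
    CI < 1 suggests synergy, CI = 1 additivity, CI > 1 antagonism.
    """
    return d1 / dx1 + d2 / dx2

# Hypothetical example: the combination needs a quarter and a fifth of
# the single-agent doses, so the index falls below 1 (synergy).
ci = combination_index(5.0, 2.0, 20.0, 10.0)  # 0.25 + 0.20 = 0.45
```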
Augmenting encoder-decoder networks for first-order logic formula parsing using attention pointer mechanisms
- Authors: Tissink, Kade
- Date: 2024-04
- Subjects: Translators (Computer programs) , Computational linguistics , Computer science
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64390 , vital:73692
- Description: Semantic parsing is the task of extracting a structured machine-interpretable representation from a natural language utterance. This representation can be used for various applications such as question answering, information extraction, and dialogue systems. However, semantic parsing is a challenging problem that requires dealing with the ambiguity, variability, and complexity of natural language. This dissertation investigates neural parsing of natural language (NL) sentences to first-order logic (FOL) formulas. FOL is a widely used formal language for expressing logical statements and reasoning. FOL formulas can capture the meaning and structure of natural language sentences in a precise and unambiguous way. The problem is initially approached as a sequence-to-sequence mapping task using both LSTM-based and transformer encoder-decoder architectures for character-, subword-, and word-level text tokenisation. These models are trained on NL-FOL datasets using supervised learning and evaluated on various metrics such as exact match accuracy, syntactic validity, formula structure accuracy, and predicate/constant similarity. A novel augmented model is then introduced that decomposes the task of neural FOL parsing into four inter-dependent subtasks: template decoding, predicate and constant recognition, predicate set pointing, and object set pointing. The components for the four subtasks are jointly trained using multi-task learning and evaluated using the same metrics as the sequence-to-sequence models. The results indicate improved performance over the sequence-to-sequence models, and the modular design allows for more interpretability and flexibility. Additionally, to compensate for the scarcity of open-source, labelled NL-FOL datasets, a new benchmark is constructed from publicly accessible data. The data consists of NL sentences paired with corresponding FOL formulas in a standardised notation. The data is split into training, validation, and test sets. 
The main contributions of this dissertation are: an in-depth literature review covering decades of research, presented with a consistent notation; the construction of a complex NL-FOL benchmark that includes algorithmically generated and human-annotated FOL formulas; a novel transformer encoder-decoder architecture that is shown to train successfully at significant depths; an evaluation of twenty sequence-to-sequence models on the task of neural FOL parsing across different text representations and encoder-decoder architectures; a novel augmented FOL parsing architecture; and an in-depth analysis of the strengths and weaknesses of these models. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics , 2024
- Full Text:
- Date Issued: 2024-04
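Two of the evaluation metrics this abstract names, exact match accuracy and syntactic validity, can be illustrated with a minimal sketch. The function names are illustrative, and the bracket check below is only a crude stand-in for the full well-formedness test a real FOL parser would perform:

```python
def exact_match_accuracy(predicted, gold):
    """Fraction of predicted FOL strings identical to the gold formula."""
    pairs = list(zip(predicted, gold))
    return sum(p.strip() == g.strip() for p, g in pairs) / len(pairs)

def parens_balanced(formula):
    """Crude syntactic check: parentheses balance and never close early."""
    depth = 0
    for ch in formula:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # a ')' appeared before its matching '('
                return False
    return depth == 0

# Hypothetical predictions vs. gold formulas: one of two matches exactly.
acc = exact_match_accuracy(
    ["all x (P(x) -> Q(x))", "exists y P(y)"],
    ["all x (P(x) -> Q(x))", "exists y Q(y)"],
)  # 0.5
```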
Augmenting the Moore-Penrose generalised inverse to train neural networks
- Authors: Fang, Bobby
- Date: 2024-04
- Subjects: Neural networks (Computer science) , Machine learning , Mathematical optimization -- Computer programs
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/63755 , vital:73595
- Description: An Extreme Learning Machine (ELM) is a fast, non-iterative feedforward neural network training algorithm which uses the Moore-Penrose (MP) generalised inverse of a matrix to compute the weights of the output layer of the neural network, using a random initialisation for the hidden layer. While ELM has been used to train feedforward neural networks, the effectiveness of the MP generalised inverse for training recurrent neural networks had yet to be investigated. The primary aim of this research was to investigate how biases in the output layer and the MP generalised inverse can be used to train recurrent neural networks. To accomplish this, the Bias Augmented ELM (BA-ELM), which concatenates the hidden layer output matrix with a ones-column vector to simulate biases in the output layer, was proposed. A variety of datasets generated from optimisation test functions, as well as real-world regression and classification datasets, were used to validate BA-ELM. The results showed that in specific circumstances BA-ELM was able to perform better than ELM. Following this, Recurrent ELM (R-ELM), which uses a recurrent hidden layer instead of a feedforward hidden layer, was proposed. Recurrent neural networks also rely on having functional feedback connections in the recurrent layer. A hybrid training algorithm, Recurrent Hybrid ELM (R-HELM), was therefore proposed, which uses a gradient-based algorithm to optimise the recurrent layer and the MP generalised inverse to compute the output weights. The evaluation of the R-ELM and R-HELM algorithms was carried out using three different recurrent architectures on two recurrent tasks derived from the Susceptible-Exposed-Infected-Removed (SEIR) epidemiology model. Various training hyperparameters were investigated to evaluate their effect on the hybrid training algorithm. 
With optimal hyperparameters, the hybrid training algorithm was able to achieve better performance than the conventional gradient-based algorithm. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
- Full Text:
- Date Issued: 2024-04
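The training rule this abstract describes (a random, fixed hidden layer with output weights solved via the MP pseudoinverse, and BA-ELM's appended ones column to simulate an output bias) can be sketched in a few lines of numpy. This is a minimal illustration, assuming a tanh activation; the function names are illustrative, not taken from the thesis:

```python
import numpy as np

def train_elm(X, T, n_hidden, seed=0, bias_augmented=False):
    """ELM-style training: random hidden layer, pinv for output weights.

    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
    bias_augmented=True appends a ones column to the hidden output
    matrix, so the last row of beta acts as an output-layer bias (BA-ELM).
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random, fixed input weights
    b = rng.standard_normal(n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                            # hidden layer output matrix
    if bias_augmented:
        H = np.hstack([H, np.ones((H.shape[0], 1))])
    beta = np.linalg.pinv(H) @ T                      # MP generalised inverse step
    return W, b, beta

def predict_elm(X, W, b, beta, bias_augmented=False):
    H = np.tanh(X @ W + b)
    if bias_augmented:
        H = np.hstack([H, np.ones((H.shape[0], 1))])
    return H @ beta
```

As a quick check, a network with 100 random tanh units fitted this way reproduces a smooth 1-D target function almost exactly, since the output weights are an exact least-squares solution.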
Comparative study of the effect of iloprost on neuroinflammatory changes in C8-B4 microglial cells and a murine model of trypanosomiasis
- Authors: Jacobs, Ashleigh
- Date: 2024-04
- Subjects: Trypanosomiasis -- South Africa , DNA -- Methylation -- Research -- Methodology , Central nervous system -- Diseases , Nervous system -- Degeneration
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64077 , vital:73651
- Description: Neurodegenerative conditions significantly impact well-being and quality of life, with major symptoms including mood disorders, cognitive decline, and psychiatric disturbances, often resulting from neuroinflammation triggered by immune responses to bacterial or parasitic infections such as gram-negative bacteria or Human African Trypanosomiasis. Microglia play a crucial role both in neurotoxicity and in the cellular processes involved in restoring neural health. The therapeutic potential of prostacyclin and its analogues in regulating microglial responses to inflammatory insult and in treating Trypanosoma brucei (T. b.) infection remains an unexplored area. The aim of this study was to assess the potential neuroprotective effects of Iloprost through comparative analysis of neuroinflammatory responses in both microglial cells exposed to lipopolysaccharide (LPS) and mouse brains infected with T. b. brucei. In phase I of this study, both resting and LPS-treated C8-B4 microglial cells were exposed to varying concentrations of Iloprost. The effects of Iloprost on LPS-induced inflammation were analysed using immunofluorescence to detect microglial activation and differentiate between pro- and anti-inflammatory phenotypes. Pro- and anti-inflammatory cytokine secretion was determined using ELISA, and gene expression analysis was carried out using quantitative polymerase chain reaction (qPCR). In addition, the DNA methylation status of C8-B4 cells exposed to LPS challenge alone or in combination with various concentrations of Iloprost was determined using bisulfite sequencing followed by qPCR. In phase II of the study, a total of twenty-four Albino Swiss male mice (8-10 weeks old) were divided into four treatment groups of 6 mice each. All treatment groups except the non-infected control were inoculated with the T. b. brucei parasite. 
One group received a single intraperitoneal injection of Diminazene aceturate (4 mg kg⁻¹) while another group received repeated intraperitoneal injections of Iloprost (200 μg kg⁻¹). On day ten of the study, mouse brains were removed on ice using forceps. The hippocampal tissues were dissected out and processed for quantification of gene expression changes in pro- and anti-inflammatory cytokines. Overall, the findings of this study indicate that LPS-induced secretion and gene expression of the pro-inflammatory cytokines TNF-α and IL-1β are down-regulated in C8-B4 microglial cells treated with Iloprost. Furthermore, there was significant up-regulation in the expression of anti-inflammatory genes, particularly ARG-1, CD206, BDNF and CREB, in response to Iloprost treatment following LPS-induced inflammation. This study is also the first to confirm M2 microglial polarisation with Iloprost treatment in both resting and LPS-treated cells. However, hypermethylation at the CREB and BDNF promoter regions was observed 24 hours after Iloprost treatment. Additionally, Iloprost reversed the hypomethylation at the BDNF promoter region that had been induced by LPS treatment. The rodent model also showed down-regulation in expression of the pro-inflammatory cytokine IL-1β and up-regulation of BDNF transcription in T. b. brucei infected mice treated with repeated doses of Iloprost. In conclusion, determining the immunomodulatory roles of Iloprost in both in vitro and in vivo models of neuroinflammation could assist in the development of alternative therapies for neurodegenerative disease. , Thesis (MSc) -- Faculty of Science, School of Biomolecular & Chemical Sciences, 2024
- Full Text:
- Date Issued: 2024-04
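Fold changes in cytokine gene expression from qPCR, such as those this abstract reports, are commonly computed with the Livak 2^-ΔΔCt method. A sketch under that assumption (the thesis may use a different quantification model or software; the Ct values below are purely illustrative):

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Livak 2^-ΔΔCt relative quantification from raw qPCR Ct values.

    Each Ct is normalised against a reference (housekeeping) gene,
    then the treated sample is compared against the control sample.
    A result > 1 means up-regulation relative to control.
    """
    delta_treated = ct_target_treated - ct_ref_treated
    delta_control = ct_target_control - ct_ref_control
    return 2.0 ** -(delta_treated - delta_control)

# Hypothetical Ct values: the target amplifies two cycles earlier
# (relative to the reference gene) in treated cells -> 4-fold up-regulation.
fold = relative_expression(24.0, 20.0, 26.0, 20.0)  # 4.0
```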
Comparing stable isotope ratios and metal concentrations between components of the benthic food web: a case study of the Swartkops Estuary, South Africa
- Authors: Ndoto, Asiphe
- Date: 2024-04
- Subjects: Swartkops River Estuary (South Africa) , Estuarine ecology -- South Africa -- Swartkops River Estuary , Fishes -- Ecology -- South Africa -- Swartkops River Estuary
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64256 , vital:73669
- Description: Estuarine systems are highly productive ecosystems; however, they are subjected to high anthropogenic pressure such as metal contamination and increased nutrient loads. The metals and nutrients contaminating urban estuaries derive from industrial waste and from agricultural and urban runoff that flows into estuaries. An example of such a system is the Swartkops Estuary: industry and three wastewater treatment plants within the Swartkops River catchment are major sources of metal and nutrient pollution, respectively. The metals accumulate in the environment, are biomagnified up the food web, and are transferred from one trophic level to another. At lethal concentrations, metals pose a threat to organisms using the estuary by affecting their physiological and biochemical processes. Stable isotope analysis has proven to be an effective tool for investigating trophic linkages in food chains from a variety of environments. Assessing both metals and stable isotopes in the estuary provides a more robust understanding of the pathways by which metals accumulate, biomagnify, and transfer from the environment through the estuarine food web. , Thesis (MSc) -- Faculty of Science, School of Environmental Sciences, 2022
- Full Text:
- Date Issued: 2024-04
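The trophic-transfer reasoning in the abstract above is usually quantified by estimating trophic level from nitrogen stable isotope ratios. A minimal sketch of the standard additive δ15N model, assuming the common ~3.4‰ per-step enrichment; all numbers below are illustrative, not data from this thesis:

```python
def trophic_level(d15n_consumer, d15n_baseline, enrichment=3.4, baseline_tl=2.0):
    """Estimate trophic level from nitrogen stable isotope ratios.

    Standard additive model:
        TL = baseline_tl + (d15N_consumer - d15N_baseline) / enrichment
    where the ~3.4 per-mil per-step enrichment is a widely used
    literature default, and the baseline organism (e.g. a primary
    consumer) is assigned trophic level 2.
    """
    return baseline_tl + (d15n_consumer - d15n_baseline) / enrichment

# Hypothetical values: a fish at d15N = 14.2 per mil over a
# primary-consumer baseline of 7.4 per mil sits at trophic level 4.
print(round(trophic_level(14.2, 7.4), 2))  # → 4.0
```

Pairing such trophic-level estimates with measured metal concentrations is what allows biomagnification through the food web to be assessed.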
Development of a numerical geohydrological model for a fractured rock aquifer in the Karoo, near Sutherland, South Africa
- Authors: Maqhubela, Akhona
- Date: 2024-04
- Subjects: Hydrogeology -- South Africa -- Northern Cape , Groundwater -- South Africa -- North Cape -- Management , Evapotranspiration
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64164 , vital:73658
- Description: The regional-scale method of groundwater storage observation introduces uncertainties that hinder evaluation of the remaining lifespan of depleted aquifers. The scarcity of precipitation data presents a significant global challenge, especially in semi-arid regions. This study constructs a regional numerical hydrogeological model that identifies the potential impacts of climate change on the water balance for the South African Gravimetric Observation Station in Sutherland. The purpose of this study is to understand the mechanisms controlling groundwater in the fractured rock aquifer. Climate data covering the last ten years were collected from the South African Weather Service and, together with groundwater-level data, were used to assess the potential impacts of climate change on water balance components, especially precipitation and evapotranspiration. Precipitation is the primary recharge parameter in this study; the highest levels were recorded in winter, with May having the highest precipitation rate of 24.62 mm. The instrument conducted two profile investigations in a single day to detect geological abnormalities at various depths, achieving an accuracy of up to 0.001 mV. The fact that groundwater flows from regions of higher hydraulic heads to areas of lower hydraulic heads confirms that riverbeds in Sutherland act as preferential conduits for subsurface recharge. The profile and processed geophysical maps show a low probability of finding groundwater in the observed area due to the considerable depth, approximately 150 – 210 m. The river package of the MODFLOW model shows little inflow at the well locations near the study area. The model results showed a negative difference between water flowing into and out of the system of about -7 m³ between 2002 and 2020. Groundwater flows faster at borehole five, where the hydraulic conductivity is high. 
The resulting regional hydrogeological model offered valuable insights into how climate change might influence the distribution and accessibility of groundwater resources. In the context of Sutherland, a negative groundwater budget value signaled that groundwater extraction or consumption surpassed the natural replenishment or recharge of the aquifer. , Thesis (MSc) -- Faculty of Science, School of Environmental Sciences, 2024
- Full Text:
- Date Issued: 2024-04
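The abstract above rests on two simple quantitative ideas: Darcy's law (flow from higher to lower hydraulic head) and a water budget (inflows minus outflows). A minimal sketch of both; the conductivity, heads and volumes below are hypothetical illustrations, not model outputs from this thesis:

```python
def darcy_flux(K, h1, h2, L):
    """Darcy's law: specific discharge q = -K * dh/dl (m/day).

    K: hydraulic conductivity (m/day); h1, h2: hydraulic heads (m)
    at two points a distance L (m) apart. A positive result means
    flow from the higher head h1 toward the lower head h2.
    """
    return -K * (h2 - h1) / L

def budget_difference(inflows, outflows):
    """Net water balance (same units as the inputs, e.g. m^3).
    Negative = abstraction/outflow exceeds recharge, i.e. depletion."""
    return sum(inflows) - sum(outflows)

# Hypothetical numbers: K = 0.5 m/day and a 2 m head drop over 100 m
# gives q = 0.01 m/day toward the lower head; a budget with 120 m^3
# in and 127 m^3 out yields a net of -7 m^3 (depletion).
print(darcy_flux(0.5, 102.0, 100.0, 100.0), budget_difference([120.0], [127.0]))
```

A negative budget value, as reported for Sutherland, signals that extraction exceeds natural recharge over the modelled period.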
Development of a numerical geohydrological model for a fractured rock aquifer in the Karoo, near Sutherland, South Africa
- Authors: Maqhubela, Akhona
- Date: 2024-04
- Subjects: Groundwater -- South Africa -- Northern Cape , Hydrogeology -- South Africa -- Northern Cape , Remote sensing , Geographic information systems
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64163 , vital:73659
- Description: The regional-scale method of groundwater storage observation introduces uncertainties that hinder evaluation of the remaining lifespan of depleted aquifers. The scarcity of precipitation data presents a significant global challenge, especially in semi-arid regions. This study constructs a regional numerical hydrogeological model that identifies the potential impacts of climate change on the water balance for the South African Gravimetric Observation Station in Sutherland. The purpose of this study is to understand the mechanisms controlling groundwater in the fractured rock aquifer. Climate data covering the last ten years were collected from the South African Weather Service and, together with groundwater-level data, were used to assess the potential impacts of climate change on water balance components, especially precipitation and evapotranspiration. Precipitation is the primary recharge parameter in this study; the highest levels were recorded in winter, with May having the highest precipitation rate of 24.62 mm. The instrument conducted two profile investigations in a single day to detect geological abnormalities at various depths, achieving an accuracy of up to 0.001 mV. The fact that groundwater flows from regions of higher hydraulic heads to areas of lower hydraulic heads confirms that riverbeds in Sutherland act as preferential conduits for subsurface recharge. The profile and processed geophysical maps show a low probability of finding groundwater in the observed area due to the considerable depth, approximately 150 – 210 m. The river package of the MODFLOW model shows little inflow at the well locations near the study area. The model results showed a negative difference between water flowing into and out of the system of about -7 m³ between 2002 and 2020. Groundwater flows faster at borehole five, where the hydraulic conductivity is high. 
The resulting regional hydrogeological model offered valuable insights into how climate change might influence the distribution and accessibility of groundwater resources. In the context of Sutherland, a negative groundwater budget value signaled that groundwater extraction or consumption surpassed the natural replenishment or recharge of the aquifer. , Thesis (MSc) -- Faculty of Science, School of Environmental Sciences, 2022
- Full Text:
- Date Issued: 2024-04
Development of the zirconium-based metal-organic framework UiO-66 for adsorption-mediated electrochemical sensing of organonitrogen compounds in fuels
- Authors: Mokgohloa, Mathule Collen
- Date: 2024-04
- Subjects: Electrochemical sensors , Quinoline -- synthesis , Pyridine -- Synthesis
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64193 , vital:73663
- Description: The combustion of fuels containing organonitrogen compounds has led to an increase in atmospheric and environmental levels of nitrogen oxides, which are responsible for several environmental, ecological, and human health problems. With increasingly strict environmental regulations and the deleterious effects of nitrogen-containing compounds in fuels, there is a strong need for the removal and detection of nitrogen-containing compounds in fuels to produce fuels with lower nitrogen levels. The Environmental Protection Agency (EPA) mandates that the nitrogen content of fossil fuels be less than about 1 wt%. The existing analytical techniques used for the quantification of nitrogen-containing compounds in fuels include GC-MS, GC-AED, and spectrophotometry. Despite being sensitive and specific, these methods require expensive equipment, highly trained personnel, and time-consuming pre-treatment to avoid interferences from similar compounds, and they suffer from analyte loss and inadequate results. Thus, they can only be carried out in off-site laboratories, which precludes rapid on-site screening. The metal-organic framework (MOF) UiO-66-NH2 and its composites UiO-66-NH2/GA and UiO-66-NH2/GO-NH2 (GA = graphene aerogel and GO = graphene oxide) have shown great potential in the adsorption of organonitrogen compounds like quinoline. However, research into the electrochemical application of these MOFs and their derivatives is limited despite their high surface area, abundant porosity, and increased conductivity. To demonstrate their electrochemical sensing potential, modification of the glassy carbon electrode (GCE) was proposed: the modified UiO-66-NH2/GA and UiO-66-NH2/GO-NH2 surfaces would show a higher degree of association for pyridine and quinoline, thereby creating a more favourable route for adsorption. This would result in enhanced sensing of pyridine and quinoline in model fuel. 
Thus, unlike the bare GCE, the fabricated/modified electrode can selectively detect high levels of organonitrogen compounds. In this study (Chapter 3), UiO-66-NH2/GA and UiO-66-NH2/GO-NH2 are prepared via the solvothermal method and then characterized using various spectroscopic and imaging techniques, such as Scanning Electron Microscopy (SEM), X-ray Photoelectron Spectroscopy (XPS), Ultraviolet-Visible Spectroscopy (UV-VIS), Thermogravimetric Analysis (TGA) and X-ray Diffraction (XRD). , Thesis (MSc) -- Faculty of Science, School of Biomolecular & Chemical Sciences, 2024
- Full Text:
- Date Issued: 2024-04
Development of TiO2 nanostructures with a modified energy band gap for hydrogen extraction
- Authors: Mutubuki, Arnold
- Date: 2024-04
- Subjects: Nanostructures , Nanoscience , Nanochemistry
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64226 , vital:73666
- Description: Accelerating fossil fuel depletion has motivated research into alternative, cost-effective and clean processes for energy production from renewable sources. The scientific community is currently engaged in extensive research to develop viable, sustainable methods for generating green hydrogen. Titania (TiO2) is historically the most studied photoactive semiconductor material, with great potential in photoelectrochemical water splitting (PECWS) following the discovery by Fujishima and Honda in 1972. TiO2 possesses superior physicochemical characteristics and band gap edges, which enable the semiconductor to effectively facilitate the PECWS process. Efforts are still ongoing to explore alternatives for narrowing the optical band gap energy of TiO2 for an efficient photoelectrode. In this research work, open-ended and well-ordered TiO2 nanotubular arrays were synthesised by a three-step anodization process. The third anodization was crucial to detach the TiO2 thin film from the opaque Ti metal substrate. The free-standing thin films were transferred and pasted onto conductive FTO-coated glass substrates transparent to visible light and annealed at 400 ℃ for crystallisation. The multi-step anodization showed an improved top-tube morphology by eliminating the initiation TiO2 mesh formed when a conventional single-step anodization process is used under similar conditions. To widen the absorption range of the samples, CuO nanosheets were deposited onto the nanotubular TiO2/FTO films through successive ionic layer adsorption and reaction (SILAR), a wet chemical method. The formation of a CuO/TiO2 nanostructure enhances the transfer of photogenerated carriers, suppressing charge recombination. This research focused on investigating the influence of selected SILAR parameters on the formation of CuO nanostructures. 
The first was the effect of precursor concentration on the structural, morphological and optical properties of the CuO/TiO2/FTO nanostructured photoelectrode. The effect of the precursor concentration on the structure and morphology was evident in the X-ray diffraction (XRD) patterns and scanning electron microscopy (SEM) micrographs. Crystallite sizes of the deposited CuO increased from 10.6 nm to 15.7 nm as the precursor concentration was varied from 0.02 M to 0.10 M. The UV-visible absorbance results show that an increase in precursor concentration leads to a red shift of both the peak absorbance and the edge wavelength of the CuO/TiO2/FTO absorbance spectra. This phenomenon is believed to be caused by the presence of CuO, which absorbs strongly in the visible spectrum. As evidenced by the study, a continued increase in precursor concentration does not result in further widening of the absorption band, as demonstrated by a CuO/TiO2/FTO sample decorated with a 0.2 M precursor. The second was the effect of SILAR immersion cycles on the properties of the developed CuO/TiO2/FTO nanostructure. The increase in the number of immersion cycles led to a notable progression in the adsorption of cupric oxide on the TiO2/FTO samples. A red shift in the absorbance peak and edge wavelength is observed in the UV-visible spectra of the CuO/TiO2/FTO photoelectrode. The efficacy of the SILAR technique in modifying the absorption band of nanotubular TiO2 thin films has been demonstrated through comprehensive analysis and correlation of the relationships between structure and optical properties, as evidenced by the XRD patterns, Raman spectra, SEM and TEM micrographs, and UV-visible absorbance spectra. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
- Full Text:
- Date Issued: 2024-04
Dislocation imaging of AISI316L stainless steels using electron channeling contrast imaging (ECCI)
- Authors: Pullen, Luchian Charton Morne
- Date: 2024-04
- Subjects: Electron microscopy , Microscopy -- Technique
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64301 , vital:73674
- Description: This study investigates the use of electron microscopy to image dislocations in high-temperature steels used in the electrical power generation industry. Dislocations play an important role in the mechanical properties of steels, which continuously evolve during component manufacturing and subsequent in-service exposure due to creep and/or fatigue. The dislocation density of the steels can potentially be used as a fingerprint to identify at-risk components that have either reached end-of-life or were incorrectly manufactured due to forming or heat treatments. Traditionally, dislocation measurements are performed using transmission electron microscopy (TEM) on thin foil samples. However, accurate and precise measurement of the dislocation density in steels using TEM remains a challenge due to the time-consuming nature of the technique, small sampling volumes, and the effects of sample preparation on the quantitative results. The aim of this study is to evaluate and establish electron channeling contrast imaging (ECCI) as a scanning electron microscopy method for quantifying the dislocation densities of power plant steels. This method can be applied to conventionally polished bulk samples, allowing large areas to be sampled. Samples of AISI316L stainless steel were used as a model alloy (large grain size, ~100 μm) to compare dislocation imaging using annular dark field (ADF)-scanning TEM (STEM) and ECCI. Three material states were investigated: a cold-drawn rod (high dislocation density), an annealed rod (low dislocation density), and an annealed sample subjected to cyclic fatigue testing (medium dislocation density). 
Systematic investigations of the data acquisition parameters showed that an incident beam energy of 20 kV, a beam current of ~4 nA, a pixel size of 5 nm, and a working distance of 4-5 mm on a JEOL7001F SEM fitted with a retractable BSE detector could successfully image the dislocation structures for the material states used in this study. The ECCI technique was successfully used to determine the dislocation density in the three material states, and the quantitative results showed similar trends to the ADF-STEM quantification results, but with less effort. Future studies using electron backscattered diffraction (EBSD) orientation mapping combined with electron channeling pattern (ECP) calibrations using a single-crystal Si sample will allow ECCI imaging under controlled grain orientations. Furthermore, accurate image segmentation of dislocations from a micrograph remains a key limitation, which can be improved with the use of advanced image analysis based on deep learning approaches. The quantitative dislocation density techniques demonstrated in this study can be adapted not only to studies of other power plant steels (e.g. 9-12% Cr Creep Strength Enhanced Ferritic) but also to other materials systems, such as aluminium, to study recrystallization processes during annealing. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2025
- Full Text:
- Date Issued: 2024-04
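Dislocation densities from micrographs are commonly quantified with a line-intercept estimator: dislocations crossing a grid of test lines are counted and converted to a density. A minimal sketch of one such estimator, assuming a known probed (information) depth for ECCI; the counts and depth below are hypothetical and the estimator is a generic textbook form, not necessarily the exact method of this thesis:

```python
def dislocation_density(n_intersections, test_line_length_m, info_depth_m):
    """Line-intercept estimate: rho = 2*N / (L * t), in m^-2.

    n_intersections: intersections between dislocation lines and a
    test-line grid drawn on the micrograph;
    test_line_length_m: total length of the test lines (m);
    info_depth_m: probed depth (m) -- for ECCI this is an assumption,
    often taken as some tens of nanometres of backscatter escape depth.
    """
    return 2.0 * n_intersections / (test_line_length_m * info_depth_m)

# Hypothetical counts: 120 intersections over 60 um of test lines,
# assuming a 100 nm information depth, gives a density on the order
# of 10^13 m^-2 -- a plausible scale for deformed austenitic steel.
print(f"{dislocation_density(120, 60e-6, 100e-9):.2e}")
```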
Elephant impacts on plant diversity and structure in the Shamwari Private Game Reserve
- Authors: Halvey, Andrew Lloyd
- Date: 2024-04
- Subjects: Elephants -- Nutrition -- South Africa -- Eastern Cape , Elephants -- Habitat -- South Africa -- Eastern Cape , Shamwari Game Reserve (South Africa)
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/63777 , vital:73597
- Description: Many African landscapes rely on processes such as fire, tree-fall and drought, in addition to herbivores, to initiate change across the landscape. In the Eastern Cape, elephants have a significant impact on the community structure and diversity of the vegetation they live in. This is most likely the case for the Albany Valley Thicket and azonal riparian vegetation of Shamwari Private Game Reserve, where browsing animals, particularly megaherbivores such as the black rhinoceros and elephant, are the main cause of defoliation. The presence of large herbivores creates challenges for the long-term sustainability and biodiversity of the vegetation in Shamwari. Vegetation monitoring provides essential information for effective management of megaherbivores, not only in Shamwari but in many other similar reserves. The aim of this study was to design a monitoring plan for the Albany Valley Thicket and riparian vegetation in Shamwari using available vegetation metrics. The vegetation was measured in permanent plots (90 m line intercept analysis per plot) in the Albany Valley Thicket and riparian vegetation of Shamwari. Plot selection was based on thicket structural integrity, using NDVI score as a proxy. In all plots, thicket structure was assessed using canopy heights measured every 50 cm along the line. Detrended correspondence analysis of the species abundance data suggested three distinct structural and compositional vegetation states for thicket and riparian vegetation: dense, intermediate and open. Significant relationships between NDVI and vegetation structural metrics across the condition states indicated that NDVI could be used as a proxy for vegetation condition. Vegetation compositional metrics, however, were not always correlated with NDVI, and determining species diversity for the vegetation provides additional information useful for monitoring. 
The recommended monitoring for the reserve is an annual summer evaluation of vegetation structural integrity using NDVI. Areas of change could then be measured for diversity, as well as for change in the abundance of selected plant indicator species. This information should be used to initiate management actions if unwanted change has occurred. , Thesis (MSc) -- Faculty of Science, School of Environmental Sciences, 2024
- Full Text:
- Date Issued: 2024-04