Evaluating and improving morpho-syntactic classification over multiple corpora using pre-trained, off-the-shelf, parts-of-speech tagging tools
- Authors: Glass, Kevin R , Bangay, Shaun D
- Date: 2008
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/433427 , vital:72969 , https://hdl.handle.net/10520/EJC28053
- Description: This paper evaluates six commonly available parts-of-speech tagging tools over corpora other than those upon which they were originally trained. In particular, this investigation measures the performance of the selected tools over varying styles and genres of text without retraining, under the assumption that domain-specific training data is not always available. An investigation is performed to determine whether improved results can be achieved by combining the set of tagging tools into ensembles that use voting schemes to determine the best tag for each word. It is found that while accuracy drops due to non-domain-specific training and tag-mapping between corpora, it remains very high, with the support vector machine-based tagger and the decision-tree-based tagger performing best over different corpora. It is also found that an ensemble containing a support vector machine-based tagger, a probabilistic tagger, a decision-tree-based tagger and a rule-based tagger produces the largest increase in accuracy and the largest reduction in error across different corpora, using the Precision-Recall voting scheme. (An illustrative voting sketch follows this record.)
- Full Text:
- Date Issued: 2008
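The following is a minimal sketch of the tag-level ensemble voting idea described in the abstract above, assuming each tagger's per-tag precision has been estimated on held-out data. The paper's exact Precision-Recall weighting is not reproduced here, and all tagger names and values are illustrative.

```python
# Minimal sketch of tag-level ensemble voting over several POS taggers.
# This illustrative version weights each tagger's vote by its (assumed known)
# precision for the tag it proposes; it is not the paper's exact scheme.
from collections import defaultdict

def ensemble_tag(token_tags, precision):
    """token_tags: {tagger_name: proposed_tag} for one token.
    precision: {(tagger_name, tag): precision estimated on held-out data}."""
    scores = defaultdict(float)
    for tagger, tag in token_tags.items():
        # Fall back to a neutral weight when no precision estimate exists.
        scores[tag] += precision.get((tagger, tag), 0.5)
    return max(scores, key=scores.get)

# Hypothetical example: three taggers disagree on the tag for one token.
votes = {"svm": "VB", "tree": "NN", "hmm": "VB"}
prec = {("svm", "VB"): 0.97, ("tree", "NN"): 0.91, ("hmm", "VB"): 0.89}
print(ensemble_tag(votes, prec))  # -> "VB"
```

In practice the precision table would be built per tagger and per tag from a held-out annotated corpus before the ensemble is applied.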
A LightWave 3D plug-in for modeling long hair on virtual humans
- Authors: Patrick, Deborah , Bangay, Shaun D
- Date: 2003
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/432953 , vital:72916 , https://doi.org/10.1145/602330.602360
- Description: Multimedia applications today make use of virtual humans. Generating realistic virtual humans is a challenging problem owing to a number of factors, one being the simulation of realistic hair. The difficulty in simulating hair is due to the physical properties of hair: the average human head holds thousands of hairs, with the width of each hair often smaller than the size of a pixel, and complex lighting effects occur within hair. This paper presents a LightWave 3D plug-in for modeling thousands of individual hairs on virtual humans. The plug-in allows the user to specify the length, thickness and distribution of the hair, as well as the number of segments each hair is made up of. The plug-in is able to add hairs to a head model, which the user then modifies to define a hairstyle. The hairs are then multiplied by the plug-in to produce many hairs. By providing a plug-in that does most of the work and produces realistic results, the user can produce a hairstyle without modeling each individual strand of hair. This greatly reduces the time spent on hair modeling and makes it practical to add realistic long hair to virtual humans. (An illustrative strand-multiplication sketch follows this record.)
- Full Text:
- Date Issued: 2003
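The sketch below illustrates the general hair-multiplication idea in plain Python rather than the LightWave 3D plug-in SDK; the parameters (copies per guide, jitter, segment count) are hypothetical stand-ins for the controls described in the abstract.

```python
# Illustrative sketch (not the actual LightWave SDK) of "multiplying" a few
# user-edited guide hairs into many strands: each guide strand is copied with
# small random offsets, and each copy is resampled to a fixed segment count.
import random

def resample(points, segments):
    """Linearly resample a polyline to a fixed number of segments."""
    out = []
    for i in range(segments + 1):
        t = i * (len(points) - 1) / segments
        j, frac = int(t), t - int(t)
        j2 = min(j + 1, len(points) - 1)
        out.append(tuple(a + frac * (b - a) for a, b in zip(points[j], points[j2])))
    return out

def multiply_guide_hairs(guides, copies_per_guide=50, jitter=0.05, segments=8):
    """guides: list of strands, each a list of (x, y, z) control points."""
    strands = []
    for guide in guides:
        for _ in range(copies_per_guide):
            dx, dy, dz = (random.uniform(-jitter, jitter) for _ in range(3))
            offset = [(x + dx, y + dy, z + dz) for x, y, z in guide]
            strands.append(resample(offset, segments))
    return strands

# Example: one hand-placed guide hair turned into 50 jittered strands.
guide = [(0.0, 1.70, 0.0), (0.0, 1.60, 0.05), (0.0, 1.45, 0.12)]
hairs = multiply_guide_hairs([guide])
```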
Rendering optimisations for stylised sketching
- Authors: Winnemöller, Holger , Bangay, Shaun D
- Date: 2003
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/432922 , vital:72913 , https://doi.org/10.1145/602330.602353
- Description: We present work that specifically pertains to the rendering stage of stylised, non-photorealistic sketching. While a substantial body of work has been published on geometric optimisations, surface topologies, space-algorithms and natural media simulation, rendering-specific issues are rarely discussed in depth even though they are often acknowledged. We investigate the most common stylised sketching approaches and identify possible rendering optimisations. In particular, we define uncertainty functions, which are used to describe a human-error component, discuss how these pertain to geometric perturbation and textured silhouette sketching, and explain how they can be cached to improve performance. Temporal coherence, which poses a problem for textured silhouette sketching, is addressed by means of an easily computed visibility function. Lastly, we produce an effective yet surprisingly simple solution to seamless hatching, which commonly presents a large computational overhead, by using 3-D textures in a novel fashion. All our optimisations are cost-effective, easy to implement and work in conjunction with most existing algorithms. (An illustrative uncertainty-function sketch follows this record.)
- Full Text:
- Date Issued: 2003
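A minimal sketch of the cached uncertainty-function idea, assuming per-vertex pseudo-random offsets that stand in for the human-error component and stay stable across frames; the paper's actual formulation is not reproduced here.

```python
# Illustrative cached "uncertainty function": a deterministic, per-vertex
# pseudo-random offset that perturbs stroke geometry to mimic human error.
# Caching the offsets keeps the perturbation consistent from frame to frame.
import math

_cache = {}

def uncertainty(vertex_id, amplitude=0.02, seed=1234):
    """Return a repeatable 2-D offset for a given vertex id."""
    if vertex_id not in _cache:
        h = hash((vertex_id, seed))
        angle = (h % 3600) / 3600.0 * 2.0 * math.pi
        radius = ((h >> 16) % 1000) / 1000.0 * amplitude
        _cache[vertex_id] = (radius * math.cos(angle), radius * math.sin(angle))
    return _cache[vertex_id]

def perturb_stroke(points):
    """Apply the cached uncertainty offsets to a 2-D stroke polyline."""
    return [(x + uncertainty(i)[0], y + uncertainty(i)[1])
            for i, (x, y) in enumerate(points)]
```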
Geometric approximations towards free specular comic shading
- Authors: Winnemöller, Holger , Bangay, Shaun D
- Date: 2002
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/433453 , vital:72971 , https://doi.org/10.1111/1467-8659.00590
- Description: We extend the standard solution to comic rendering with a comic-style specular component. To minimise the computational overhead associated with this extension, we introduce two optimising approximations: the perspective correction angle and the vertex face-orientation measure. Both of these optimisations are generally applicable, but they are especially well suited for applications where a physically correct lighting simulation is not required. Using our optimisations we achieve performance comparable to the standard solution. As our approximations favour large models, we even outperform the standard approach for models consisting of 10,000 triangles or more, which we can render at more than 40 frames per second, including the specular component. (An illustrative cel-shading sketch follows this record.)
- Full Text:
- Date Issued: 2002
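The sketch below shows generic comic (cel) shading with a hard-thresholded specular highlight; it does not implement the paper's perspective correction angle or vertex face-orientation measure, and all band thresholds and constants are illustrative.

```python
# Simplified comic (cel) shading with a comic-style specular term: the diffuse
# intensity is quantised into a few bands and the specular highlight is
# clamped to an on/off value, giving the flat banded look of comic rendering.
def comic_shade(n, l, v, bands=(0.3, 0.7), shininess=32, spec_cutoff=0.9):
    """n, l, v: unit normal, light and view vectors as 3-tuples."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    diffuse = max(dot(n, l), 0.0)
    # Quantise the diffuse term into discrete comic-style bands.
    tone = sum(1 for b in bands if diffuse > b) / len(bands)
    # Blinn-Phong-style specular, then hard-threshold it into a highlight spot.
    h = tuple(li + vi for li, vi in zip(l, v))
    norm = sum(x * x for x in h) ** 0.5 or 1.0
    h = tuple(x / norm for x in h)
    specular = 1.0 if max(dot(n, h), 0.0) ** shininess > spec_cutoff else 0.0
    return min(tone + specular, 1.0)
```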
A Prototyping Environment for Investigating Context Aware Wearable Applications.
- Authors: Tsegaye, Melekam , Bangay, Shaun D , Terzoli, Alfredo
- Date: 1999
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/432783 , vital:72900 , https://www.cs.ru.ac.za/research/g98t4414/static/papers/wearprototsegaye05.pdf
- Description: In this paper we introduce the concept of a context-aware, wearable application prototyping environment, which can be used to support research into new wearable applications. We also present an initial specification for such an environment and show how different types of sensors can be modelled to produce data that describes a given context scenario using our prototyping approach. (An illustrative simulated-sensor sketch follows this record.)
- Full Text:
- Date Issued: 1999
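A minimal sketch of modelling a sensor that emits data describing a context scenario, as suggested by the abstract; the class, field names and scenario values are hypothetical and not taken from the paper.

```python
# Illustrative simulated sensor for prototyping context-aware wearable
# applications without physical hardware: each sensor emits timestamped
# readings drawn from a configured range that describes a chosen scenario.
import random
import time

class SimulatedSensor:
    def __init__(self, name, value_range):
        self.name = name
        self.low, self.high = value_range

    def read(self):
        """Return one timestamped reading for this sensor."""
        return {"sensor": self.name,
                "value": random.uniform(self.low, self.high),
                "time": time.time()}

# A simple "walking outdoors" scenario described by two simulated sensors.
scenario = [SimulatedSensor("temperature_C", (15, 25)),
            SimulatedSensor("step_rate_hz", (1.5, 2.5))]
for sensor in scenario:
    print(sensor.read())
```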