Mathematical Statistics and Actuarial Science
http://hdl.handle.net/11660/105

dc.title: Assessing the willingness of rural homeowners to insure their homes in South Africa using multilevel modelling
dc.identifier.uri: http://hdl.handle.net/11660/12424
dc.contributor.author: Skenjana, Samkele
dc.description.abstract: There has been an increase in urbanisation in the last decade as more South Africans seek better work opportunities in urban areas. Despite this notable increase, some individuals still prefer to build houses and reside in rural areas. There are reasons why people in South Africa have opted to invest in property in rural areas. Firstly, land in rural areas is obtained through a traditional leader or chief of the village, a much easier process than in urban areas. Secondly, the cost of land in rural areas is significantly lower than in urban areas. As valid as these reasons are, they have drawbacks, such as owners not holding a title deed for their land and not having an accurate valuation of it. This makes it difficult to insure these homes. The absence of rural home insurance in South Africa aimed at the rural homeowners fitting the description above was the driving force behind this study. The literature thus far has focused on agricultural and crop insurance in rural areas. This study explores the challenges in the rural insurance market in South Africa and the factors affecting the willingness of these rural residents to insure their rural homes.
dc.description: Dissertation (M.Sc. (Actuarial Science))--University of the Free State, 2023
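The multilevel structure that motivates the modelling choice (homeowners nested within villages) can be sketched with a small simulation. Everything below is hypothetical — the variable names, effect sizes and number of villages are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)

n_villages, homes_per_village = 50, 30
sigma_u = 0.8  # assumed between-village standard deviation on the latent scale

# Village-level random intercepts: villages differ in baseline willingness.
u = rng.normal(0.0, sigma_u, n_villages)

# Hypothetical homeowner-level covariate, e.g. standardised income.
income = rng.normal(0.0, 1.0, (n_villages, homes_per_village))

# Random-intercept logistic model: P(willing) depends on income and village.
logit = -0.5 + 0.7 * income + u[:, None]
willing = rng.random((n_villages, homes_per_village)) < 1 / (1 + np.exp(-logit))

# Latent-scale intraclass correlation for a logistic multilevel model:
# the share of unexplained variation attributable to villages.
icc = sigma_u**2 / (sigma_u**2 + np.pi**2 / 3)
print(f"willingness rate: {willing.mean():.2f}, latent ICC: {icc:.2f}")
```

A non-negligible intraclass correlation (here about 0.16) is the usual justification for fitting a multilevel rather than a pooled logistic model.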
dc.date.issued: 2023-01-01

dc.title: Quantile regression analysis of modifiable and non-modifiable predictors of stroke among adults in South Africa
dc.identifier.uri: http://hdl.handle.net/11660/11807
dc.contributor.author: Chikobvu, Delson; Matizirofa, Lyness
dc.description.abstract: Background:
Stroke is the second largest cause of mortality and long-term disability in South Africa (SA). Stroke is a multifactorial disease regulated by modifiable and non-modifiable predictors. Little is known about stroke predictors in SA, particularly which are modifiable and which are non-modifiable. Identification of stroke predictors using appropriate statistical methods can help formulate appropriate health programs and policies aimed at reducing the stroke burden. This study aims to address important gaps in the stroke literature, namely identifying and quantifying stroke predictors through quantile regression analysis.
Methods:
A cross-sectional hospital-based study was used to identify and quantify stroke predictors in SA using 35,730 individual patient records retrieved from selected private and public hospitals between January 2014 and December 2018. Ordinary logistic regression models often miss critical aspects of the relationship that may exist between stroke and its predictors. Quantile regression analysis was therefore used to model the effects of each predictor on the stroke distribution.
Results:
Of the 35,730 cases of stroke, 22,183 were diabetic. The dominant stroke predictors were diabetes, hypertension, heart problems, female gender, higher age groups and black race. The age group 55-75 years, female gender and black race had a bigger effect on the stroke distribution at the lower and upper quantiles. Diabetes, hypertension and cholesterol showed a significant impact on the stroke distribution (p < 0.0001).
Conclusion:
Most strokes are attributable to modifiable factors. Study findings will be used to raise awareness of modifiable predictors to prevent strokes. Regular screening and treatment are recommended for high-risk individuals with identified predictors in SA.
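Quantile regression rests on the pinball (check) loss, whose minimiser over a constant is exactly the τ-th sample quantile — this is what lets the method trace a predictor's effect across the whole outcome distribution rather than only the mean. A minimal numpy illustration on simulated data (not the study's records):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.gamma(shape=2.0, scale=10.0, size=5_000)  # a skewed outcome, e.g. a biomarker

def pinball_loss(q, y, tau):
    """Mean check loss: residuals weighted tau if positive, (1 - tau) if negative."""
    r = y - q
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

tau = 0.75
grid = np.linspace(y.min(), y.max(), 20_000)
losses = [pinball_loss(q, y, tau) for q in grid]
q_hat = grid[int(np.argmin(losses))]

# The grid minimiser of the pinball loss matches the sample quantile.
print(q_hat, np.quantile(y, tau))
```

Replacing the constant `q` with a linear predictor `X @ beta` and minimising the same loss gives quantile regression; fitting at several values of τ is what reveals how a covariate shifts the tails of the distribution, not only its centre.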
dc.date.issued: 2021-01-01

dc.title: Soil fertilization synergistically enhances the impact of pollination services in increasing seed yield of sunflower under dryland conditions
dc.identifier.uri: http://hdl.handle.net/11660/11687
dc.contributor.author: Adelabu, Dollop Bola; Bredenhand, Emile; Van der Merwe, Sean; Franke, Angelinus Cornelius
dc.description.abstract: To exploit the potential of ecological intensification during sunflower cropping, it is crucial to understand the potential synergies between crop management and ecosystem services. We therefore examined the effect of pollination intensification on sunflower yield and productivity under various levels of soil fertilization over two seasons in the eastern Free State, South Africa. We manipulated soil fertility with fertilizer applications and pollination with exclusion bags. We found a synergistic effect between pollination and soil fertilization, whereby increasing pollination intensity had a far greater impact on sunflower yield when the soil had been fertilized. Specifically, the intensification of insect pollination increased seed yield by approximately 0.4 ton/ha on nutrient-poor soil and by approximately 1.7 ton/ha on moderately fertilized soil. Our findings suggest that sunflower crops grown on soils with adequate, balanced fertility will receive abundant insect pollination and may gain more from this synergy than crops grown in areas with degraded soil fertility.
dc.date.issued: 2021-01-01

dc.title: Modelling international tourist arrivals volatility in Zimbabwe using a GARCH process
dc.identifier.uri: http://hdl.handle.net/11660/11237
dc.contributor.author: Makoni, Tendi; Chikobvu, Delson
dc.description.abstract: The aim of the paper was to develop bootstrap prediction intervals for international tourism demand and volatility in Zimbabwe after modelling with an ARMA-GARCH process. ARMA-GARCH models have good forecasting power and are capable of capturing and quantifying volatility. Bootstrap prediction intervals can account for the future uncertainty that arises through parameter estimation. The monthly international tourism data obtained from the Zimbabwe Tourism Authority (ZTA) (January 2000 to June 2017) is neither seasonal nor stationary, and is made stationary by taking a logarithmic transformation. An ARMA(1,1) model fits the data well, with forecasts indicating a slow increase in international tourist arrivals (outside of the Covid-19 period). The GARCH(1,1) process indicated that unexpected tourism shocks will significantly impact Zimbabwe's international tourist arrivals for long durations. Volatility bootstrap prediction intervals indicated minimal future uncertainty in international tourist arrivals. For the Zimbabwe tourism industry to remain relevant, new tourism products and attraction centres need to be developed, alongside effective marketing strategies to lure even more tourists from abroad. This will go a long way towards increasing the much-needed foreign currency earnings required to revive the Zimbabwean economy.
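The GARCH(1,1) variance recursion that drives the persistence finding is compact enough to sketch directly. The series below is simulated with hypothetical parameters, not fitted to the ZTA data; a persistence α + β close to 1 is what makes a shock's effect on volatility long-lived:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical GARCH(1,1) parameters; alpha + beta near 1 => persistent volatility.
omega, alpha, beta = 0.05, 0.10, 0.85
n = 1_000

eps = np.empty(n)      # shocks (e.g. log-differenced arrivals with the mean removed)
sigma2 = np.empty(n)   # conditional variances
sigma2[0] = omega / (1 - alpha - beta)  # start at the unconditional variance
eps[0] = np.sqrt(sigma2[0]) * rng.standard_normal()

for t in range(1, n):
    # Today's variance responds to yesterday's squared shock and yesterday's variance.
    sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

half_life = np.log(0.5) / np.log(alpha + beta)
print(f"persistence: {alpha + beta:.2f}, shock half-life: {half_life:.1f} periods")
```

With persistence 0.95, a shock's impact on the conditional variance halves only after roughly 13.5 periods — the sense in which "unexpected tourism shocks impact arrivals for longer durations."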
dc.date.issued: 2021-01-01

dc.title: Exotic equity derivatives: a comparison of pricing models and methods with both stochastic volatility and interest rates
dc.identifier.uri: http://hdl.handle.net/11660/7665
dc.contributor.author: Scheltema, Jaundre
dc.description.abstract: The traditional Black-Scholes methodology for exotic equity option pricing fails to capture the latent stochastic volatility and observed stochastic interest rate factors exhibited in financial markets today. The detailed study presented here shows how these shortcomings of the Black-Scholes methodology have been addressed in the literature, by examining some of the developments in stochastic volatility models with constant and stochastic interest rates.
A subset of these models, notably including models developed within the last two years, is then compared in a simulated study design against a complex Market Model. Each of the selected models was chosen as the "best" representative of its respective model class. The Market Model, which is specified through a system of Stochastic Differential Equations, is taken as a proxy for real-world market dynamics. All of the selected models are calibrated against the Market Model using a technique known as Differential Evolution, a globally convergent stochastic optimiser, and then used to price exotic equity options.
The end results show that the Heston-Hull-CIR (H2CIR) Model outperforms the alternative Double Heston and 4/2 Models in producing exotic equity option prices closest to the Market Model. Various other commentaries are also given assessing each of the selected models with respect to parameter stability, computational run times and robustness of implementation, with the final conclusions supporting the H2CIR Model in preference to the other models.
Additionally, a second research question is investigated relating to Monte Carlo pricing methods. Here the Monte Carlo pricing schemes used under the Black-Scholes and other pricing methodologies are extended to present a semi-exact simulation scheme built on results from the literature. This new scheme is termed the Brownian Motion Reconstruction scheme and is shown to outperform the Euler scheme when pricing exotic equity derivatives with relatively few monitoring or option exercise dates.
Finally, a minor result in this study involves a new alternative numerical method to recover transition density functions from their respective characteristic functions and is shown to be competitive against the popular Fast Fourier Transform method.
It is hoped that the results in this thesis will assist investment and banking practitioners to obtain better clarity when assessing and vetting different models for use in the industry, and extend the current range of techniques that are used to price options.
dc.description: Dissertation (M.Sc. (Actuarial Science))--University of the Free State, 2017
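As a point of reference for the schemes compared above, the baseline Euler discretisation of a Heston-type model can be sketched in a few lines. This is a generic full-truncation Euler Monte Carlo pricer for a European call with made-up parameters — not the thesis's Market Model, its H2CIR calibration, or the Brownian Motion Reconstruction scheme:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Heston parameters (risk-neutral); rho couples the two Brownian motions.
S0, K, T, r = 100.0, 100.0, 1.0, 0.03
v0, kappa, theta, xi, rho = 0.04, 2.0, 0.04, 0.5, -0.7

n_paths, n_steps = 100_000, 200
dt = T / n_steps

S = np.full(n_paths, S0)
v = np.full(n_paths, v0)

for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + math.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    vp = np.maximum(v, 0.0)  # full truncation keeps the variance usable when it dips below 0
    S *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)   # log-Euler step for the asset
    v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2  # Euler step for the variance

price = math.exp(-r * T) * np.mean(np.maximum(S - K, 0.0))
print(f"Euler MC Heston call: {price:.3f}")
```

The discretisation bias of this scheme at coarse time steps is exactly what motivates semi-exact alternatives: a scheme that is exact between monitoring dates needs far fewer steps when those dates are few.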
dc.date.issued: 2017-01-01

dc.title: Actuarial risk management of investment guarantees in life insurance
dc.identifier.uri: http://hdl.handle.net/11660/7356
dc.contributor.author: Bekker, Kobus Nel
dc.description.abstract: Investment guarantees in life insurance business have generated a lot of research in recent years due to the earlier mispricing of such products. These guarantees generally take the form of exotic options and are therefore difficult to price analytically, even in a simplified setting. A possible solution to the risk management problem of investment guarantees contingent on death and survival is proposed through the use of a conditional lower bound approximation of the corresponding embedded option value. The derivation of the conditional lower bound approximation is outlined in the case of regular premiums with asset-based charges, and the implementation is illustrated in a Black-Scholes-Merton setting. The derived conditional lower bound approximation also facilitates verifying economic-scenario-generator-based pricing and valuation, as well as sensitivity measures for hedging solutions.
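The standard starting point for such embedded options is that a maturity guarantee of amount G on a unit-linked fund pays max(G − S_T, 0), i.e. a European put, which has a closed form in the Black-Scholes-Merton setting the abstract mentions. The numbers below are illustrative; the thesis's conditional lower bound for death-and-survival benefits with regular premiums is more involved than this single-premium sketch:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bsm_put(S0: float, G: float, r: float, sigma: float, T: float) -> float:
    """Black-Scholes-Merton value of the maturity guarantee max(G - S_T, 0)."""
    d1 = (math.log(S0 / G) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return G * math.exp(-r * T) * norm_cdf(-d2) - S0 * norm_cdf(-d1)

# Hypothetical single-premium policy: fund of 100, guarantee of 100 at T = 10 years.
cost = bsm_put(S0=100.0, G=100.0, r=0.04, sigma=0.18, T=10.0)
print(f"guarantee cost per 100 of premium: {cost:.2f}")
```

Even this crude version makes the mispricing point: the guarantee carries a material option cost at issue, which naive deterministic projections value at zero.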
dc.date.issued: 2010-11-01

dc.title: Parametric and nonparametric Bayesian statistical inference in animal science
dc.identifier.uri: http://hdl.handle.net/11660/6280
dc.contributor.author: Pretorius, Albertus Lodewikus
dc.description.abstract: Chapter 1 illustrated an extension of the Gibbs sampler to solve problems arising in animal breeding theory. Formulae were derived and presented to implement the Gibbs sampler, whereafter marginal densities, posterior means, modes and credibility intervals were obtained from the Gibbs sampler.
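The core Gibbs mechanics used throughout the thesis — drawing each parameter from its full conditional in turn — can be shown on the simplest case, a normal sample with unknown mean and variance under vague conjugate-style priors. This is a generic illustration, not the mixed-model sampler the chapter derives:

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(5.0, 2.0, size=200)  # simulated data: true mean 5, true sd 2
n, ybar = len(y), y.mean()

n_iter, burn = 5_000, 500
mu, sigma2 = 0.0, 1.0               # arbitrary starting values
mu_draws, s2_draws = [], []

for it in range(n_iter):
    # Full conditional of mu | sigma2, y under a flat prior: Normal(ybar, sigma2/n).
    mu = rng.normal(ybar, np.sqrt(sigma2 / n))
    # Full conditional of sigma2 | mu, y under a Jeffreys-type prior: a scaled
    # inverse-gamma, sampled as the sum of squares divided by a chi-square draw.
    ss = np.sum((y - mu) ** 2)
    sigma2 = ss / rng.chisquare(n)
    if it >= burn:
        mu_draws.append(mu)
        s2_draws.append(sigma2)

# Marginal posterior summaries come straight from the retained draws.
print(np.mean(mu_draws), np.mean(s2_draws))
```

Credibility intervals are read off the same draws, e.g. `np.percentile(mu_draws, [2.5, 97.5])`; the mixed linear model case simply adds full conditionals for the fixed effects, random effects and variance components.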
In the Bayesian Method of Moments (BMOM) chapter we illustrated how this approach, based on a few relatively weak assumptions, is used to obtain maximum entropy densities, realized error terms and future values of the parameters for the mixed linear model. Given the data, it enables researchers to compute post-data densities for parameters and future observations when the form of the likelihood function is unknown. On introducing and proving simple assumptions relating to the moments of the realized error terms and the future, as yet unobserved, error terms, we derived post-data moments of the parameters and of future values of the dependent variable. Using these moments as side conditions, proper maximum entropy densities for the model parameters were derived and could easily be computed. It was also shown in the computed example, where the Gibbs sampler was used to compute finite-sample post-data parameter densities, that some BMOM maximum entropy densities were very similar to the traditional Bayesian densities, whilst others were not.
It should be appreciated that the BMOM approach yielded useful inverse inferences without using assumed likelihood functions, prior densities for their parameters or Bayes' theorem, and that the BMOM techniques extended in the present thesis to the mixed linear model provided valuable and significant alternatives to traditional likelihood or Bayesian analysis in animal breeding problems.
The important contribution of Chapters 3 and 4 revolved around the nonparametric modelling of the random effects. We applied a general technique for Bayesian nonparametrics to this important class of models, the mixed linear model for animal breeding experiments. Our technique involved specifying a nonparametric prior for the distribution of the random effects and a Dirichlet process prior on the space of prior distributions for that nonparametric prior. The mixed linear model was then fitted with a Gibbs sampler, which turned an analytically intractable multidimensional integration problem into a feasible numerical one, overcoming most of the computational difficulties usually experienced with the Dirichlet process.
This proposed procedure also represented a new application of the mixture of Dirichlet processes model to problems arising from animal breeding experiments. The application to, and discussion of, the breeding experiment from Kenya was helpful for understanding the importance and utility of the Dirichlet process, and inference for all the mixed linear model parameters. However, as mentioned before, a substantial statistical issue that still remains to be tackled is the great discrepancy between the resulting posterior densities of the random effects as the value of the precision parameter M changes.
We believe that Bayesian nonparametrics have much to offer and can be applied to a wide range of statistical procedures. In addition to the Dirichlet process prior, we will look in the future at other nonparametric priors such as Pólya tree priors and Bernoulli trips.
Whilst our feeling in the final chapter was that study of the performance of non-informative priors was certainly to be encouraged, we found the group reference priors to be generally highly satisfactory, and felt reasonably confident in using them in situations in which further study was impossible. Results from the different theorems showed that the group orderings of the mixed model parameters are very important, since different orderings will frequently result in different reference priors. This dependence of the reference prior on the groups chosen and their ordering was unavoidable. Our motivation and idea for the reference prior was basically to choose the prior which, in a certain asymptotic sense, maximized the information in the posterior that was provided by the data.
The thesis has surveyed a range of current research in the area of Bayesian parametric and nonparametric inference in animal science. The work is ongoing and several problems remain unresolved. In particular, more work is required in the following areas: a full Bayesian nonparametric analysis involving covariate information; multivariate priors based on stochastic processes; multivariate error models involving Pólya trees; developing exchangeable processes to cover a larger class of problems; and nonparametric sensitivity analysis.
dc.date.issued: 2000-11-01

dc.title: Aspects of Bayesian change-point analysis
dc.identifier.uri: http://hdl.handle.net/11660/6272
dc.contributor.author: Schoeman, Anita Carina
dc.description.abstract: English: In chapter one we looked at the nature of structural change and defined structural change as a change in one or more parameters of the model in question. Bayesian procedures can be applied to solve inferential problems of structural change. Among the various methodological approaches within Bayesian inference, emphasis is put on the analysis of the posterior distribution itself, since the posterior distribution can be used both for conducting hypothesis tests and for obtaining point estimates. The history of structural change in statistics, beginning in the early 1950s, is also discussed. Furthermore, the Bayesian approach to hypothesis testing was developed by Jeffreys (1935, 1961), where the centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. According to Kass and Raftery (1993), posterior odds = Bayes factor x prior odds, so the Bayes factor is the ratio of the posterior odds of H1 to its prior odds, regardless of the value of the prior odds. The intrinsic and fractional Bayes factors are defined and some advantages and disadvantages of the IBFs are discussed.
In chapter two, changes in the multivariate normal model are considered. Assuming that a change has taken place, one will want to be able to detect the change and to estimate its position as well as the other parameters of the model. To do a Bayesian analysis, prior densities should be chosen. Firstly the hyperparameters are assumed known, but as this is not usually true, vague improper priors are used (while the number of change-points is fixed). Another way of dealing with the problem of unknown hyperparameters is to use a hierarchical model where the second-stage priors are vague. We also considered Gibbs sampling and gave the full conditional distributions for all the cases. The three cases that are studied are
(1) a change in the mean with known or unknown variance,
(2) a change in the mean and variance by firstly using independent prior densities on the
different variances and secondly assuming the variances to be proportional and
(3) a change in the variance.
The same models are also considered when the number of change-points is unknown. In that case vague priors are not appropriate when comparing models of different dimensions, so we revert to partial Bayes factors, specifically the intrinsic and fractional Bayes factors, to obtain the posterior probabilities of the number of change-points. Furthermore, we look at component analysis, i.e. determining which components of a multivariate variable are mostly responsible for the changes in the parameters. The univariate case is then also considered in more detail, including multiple model comparisons and models with autocorrelated errors. A summary of approaches in the literature as well as four examples are included.
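For the simplest case above — a change in the mean of a normal sequence with known variance and a uniform prior on the change position — the posterior over the change-point can be computed exactly by enumeration. A small numpy sketch with simulated data (the shift size and location are made up):

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated sequence: the mean jumps from 0 to 2 after observation 60 (sigma known = 1).
n, k_true = 100, 60
y = np.concatenate([rng.normal(0.0, 1.0, k_true), rng.normal(2.0, 1.0, n - k_true)])

# For each candidate change-point k, integrate out the two segment means under flat
# priors: the marginal likelihood reduces to the residual sum of squares around the
# segment averages, times a 1/sqrt(k(n-k)) factor from the Gaussian integrals.
log_post = np.empty(n - 1)
for k in range(1, n):
    left, right = y[:k], y[k:]
    rss = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
    log_post[k - 1] = -0.5 * rss - 0.5 * np.log(k * (n - k))

post = np.exp(log_post - log_post.max())
post /= post.sum()
k_hat = 1 + int(np.argmax(post))
print(f"posterior mode for the change-point: {k_hat}")
```

The same enumeration gives full posterior uncertainty about the position for free, which is the practical advantage of working with the posterior distribution itself rather than a single test statistic.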
In chapter three, changes in the linear model are considered, with
(1) a change in the regression coefficient and a constant variance,
(2) a change in only the variance and
(3) a change in the regression coefficient and the variance.
Bayes factors for the above-mentioned cases, multiple change-points, component analysis, switchpoints (continuous change-points) and autocorrelation are included, together with seven examples.
In chapter four, changes in some other standard models are considered. Bernoulli-type experiments include the Binomial model, the Negative binomial model, the Multinomial model and the Markov chain model. Exponential-type models include the Poisson model, the Gamma model and the Exponential model. Special cases of the Exponential model include the left-truncated exponential model and the Exponential model with epidemic change. In all cases the partial Bayes factor is used to obtain posterior probabilities when the number of change-points is unknown. Marginal posterior densities of all parameters under the change-point model are derived. Eleven examples are included.
In chapter five change-points in the hazard rate are studied. This includes an abrupt change
in a constant hazard rate as well as a change from a decreasing hazard rate to a constant
hazard rate or a change from a constant hazard rate to an increasing hazard rate. These
hazard rates are obtained from combinations of Exponential and Weibull density functions.
In the same way a bathtub hazard rate can also be constructed. Two illustrations are given.
Some concluding remarks are made in chapter six, with discussions of other approaches in
the literature and other possible applications not dealt with in this study.
dc.date.issued: 2000-11-01

dc.title: A Bayesian analysis of multiple interval-censored failure time events with application to AIDS data
dc.identifier.uri: http://hdl.handle.net/11660/6202
dc.contributor.author: Mokgatlhe, Lucky
dc.description.abstract: English: The time to an event (failure) for units on longitudinal clinical visits cannot always be ascertained exactly. Instead, only the time interval within which the event occurred may be recorded. In that case, each unit's failure is described by a single interval, resulting in grouped interval data over the sample. Yet, due to non-compliance with visits by some units, failure may be described only by endpoints within which the event occurred. These endpoints may encompass several intervals, hence intervals overlap across units. Furthermore, some units may not experience the event of interest within the preset duration of the study, and hence are censored. Finally, several events of interest can be investigated on a single unit, resulting in several failure times that are inevitably dependent. All of this describes interval-censored survival data with multiple failure times.
Three models for analysing interval-censored survival data with two failure times were applied to four sets of data. For the distribution-free methods, Cox's hazard with either a log-log transform or a logit transform on the baseline conditional survival probabilities was used to derive the likelihood. The independence assumption model (IW) works under the assumption that the lifetimes are independent and that any dependence enters through the use of common covariates. The second model, which does not assume independence, computes the joint failure probabilities for two lifetimes by Bayes' rule of conditioning on the interval of failure for one lifetime, hence the Conditional Bivariate model (CB). The use of Clayton and Farlie-Morgenstern bivariate copulas (CC) with an inbuilt dependence parameter was the third model. For parametric models, the IW and CC methods were applied to the data sets on the assumption that the marginal distribution of the lifetimes is Weibull.
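Of the three models, the copula construction is the easiest to make concrete. The Clayton copula couples two marginal distributions through a single dependence parameter θ, with Kendall's τ = θ/(θ + 2). A small numpy sketch — simulated pairs, not the ACTG175 data, with arbitrarily chosen Weibull margins:

```python
import numpy as np

rng = np.random.default_rng(5)
theta = 2.0     # Clayton dependence parameter; implies tau = theta/(theta+2) = 0.5
n = 2_000

# Sample (U, V) from the Clayton copula by the conditional-inverse method.
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = ((w ** (-theta / (1 + theta)) - 1) * u ** (-theta) + 1) ** (-1 / theta)

# Transform the uniform pair to dependent Weibull lifetimes (hypothetical margins).
t1 = 10.0 * (-np.log(1 - u)) ** (1 / 1.5)
t2 = 8.0 * (-np.log(1 - v)) ** (1 / 1.2)

# Empirical Kendall's tau from concordant/discordant pairs (O(n^2), fine at this n).
i, j = np.triu_indices(n, k=1)
sign = np.sign((t1[i] - t1[j]) * (t2[i] - t2[j]))
tau_hat = sign.mean()
print(f"theoretical tau: {theta / (theta + 2):.2f}, empirical: {tau_hat:.2f}")
```

Because the copula separates the margins from the dependence, the Weibull marginal assumption of the parametric models can be changed without touching the dependence structure, and θ can be estimated jointly with the marginal parameters.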
The traditional classical estimation method of Newton-Raphson was used to find optimum parameter estimates, with their variances stabilized using a sandwich estimator where possible. Bayesian methods combine the data with prior information. Thus, for either transform, two proper priors were derived, whose combination with the likelihood resulted in a posterior function. To estimate the entire distribution of a parameter from non-standard posterior functions, two Markov Chain Monte Carlo (MCMC) methods were used. The Gibbs sampler method samples in turn observations from the conditional distribution of the parameter in question, while holding the other parameters constant. For intractably complex posterior functions, the Metropolis-Hastings method of sampling vectors of parameter values in blocks from a multivariate normal proposal density was used.
The analysis of the ACTG175 data revealed that increases in levels of HIV RNA precede declines in CD4 cell counts. There is a strong dependence between the two failure times, hence restricting the use of the independence model. The most preferred models are those using copulas and the conditional bivariate model. It was shown that ARVs actually improve a patient's lifetime at varying rates, with combination treatment performing better. The worrying issue is the resistance that the HIV virus develops against the drugs. This is evidenced by the adverse effect that previous use of ARVs has on patients, in that a new drug used on them has less effect. Finally, it is important that patients start therapy at an early stage, since patients displaying signs of AIDS at entry respond negatively to drugs.
dc.date.issued: 2003-05-01

dc.title: Bayesian control charts based on predictive distributions
dc.identifier.uri: http://hdl.handle.net/11660/4558
dc.contributor.author: Van Zyl, Ruaan
dc.description.abstract: English: Control charts are statistical process control (SPC) tools that are widely used in the monitoring of processes, specifically taking into account stability and dispersion. Control charts signal when a significant change in the process being studied is observed. This signal can then be investigated to identify issues and to find solutions. It is generally accepted that SPC is implemented in two phases, Phase I and Phase II. In Phase I the primary interest is assessing process stability, often trying to bring the process in control by locating and eliminating any assignable causes, estimating any unknown parameters and setting up the control charts. After that the process moves on to Phase II, where the control limits obtained in Phase I are used for online process monitoring based on new samples of data. This thesis concentrates mainly on implementing a Bayesian approach to monitoring processes using SPC. This is done by providing an overview of some non-informative priors and then specifically deriving the reference and probability-matching priors for the common coefficient of variation, standardized mean and tolerance limits for a normal population. Using the Bayesian approach described in this thesis, SPC is performed, including derivations of control limits in Phase I and monitoring by the use of run-lengths and average run-lengths in Phase II, for the common coefficient of variation, standardized mean, variance and generalized variance, tolerance limits for normal populations, the two-parameter exponential distribution, the piecewise exponential model and capability indices. Results obtained using the Bayesian approach are compared to frequentist results.
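The run-length machinery referred to above has a simple frequentist baseline worth keeping in mind: if each in-control point independently breaches the limits with probability p, the run length is geometric with average run length ARL = 1/p, which for classical 3-sigma limits gives the familiar ARL of about 370. A quick check (the Bayesian predictive versions in the thesis replace this fixed p with draws from the posterior):

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# In-control false-alarm probability for symmetric k-sigma control limits.
k = 3.0
p = 2.0 * (1.0 - norm_cdf(k))

# Geometric run length: expected number of points until the first (false) signal.
arl = 1.0 / p
print(f"false-alarm probability: {p:.5f}, in-control ARL: {arl:.1f}")
```

In the Bayesian version p itself is uncertain: simulating it from the predictive distribution yields a whole run-length distribution rather than a single ARL, which is what the Phase II comparisons in the thesis examine.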
dc.date.issued: 2016-01-01