Science Publishing Group: American Journal of Theoretical and Applied Statistics: Table of Contents
<i> American Journal of Theoretical and Applied Statistics (AJTAS) </i> publishes papers that develop and analyze new methods for any field of statistics. Papers are expected to make interesting and novel contributions to statistical theory and its applications at a good mathematical level. Results should be presented in the form of theorems together with their mathematical proofs, which should not be merely routine calculations. A discussion of the results and their value for theory or applications is a welcome addition, as are numerical results on efficiency or examples of the applicability of the theoretical results.
http://www.sciencepublishinggroup.com/j/ajtas
Science Publishing Group
en-US
American Journal of Theoretical and Applied Statistics
http://image.sciencepublishinggroup.com/journal/146.gif
http://www.sciencepublishinggroup.com/j/ajtas
Empirical Bayes Estimators of Parameter and Reliability Function for Compound Rayleigh Distribution under Record Data.
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20120101.12
Based on record samples, the empirical Bayes estimators of the parameter and the reliability function of the Compound Rayleigh distribution are investigated under symmetric and asymmetric loss functions. The symmetric loss function is squared error; for the asymmetric loss functions, we consider the LINEX and general entropy loss functions.
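The three loss functions named above give closed-form Bayes estimates whenever the posterior of the parameter is Gamma, the usual conjugate setup for Rayleigh-type models (an assumption for illustration; the paper's exact record-data posterior may differ). A minimal sketch:

```python
import math

def bayes_estimates(alpha, beta, a=1.0, q=0.5):
    """Bayes estimates for a Gamma(alpha, rate=beta) posterior under three losses."""
    se = alpha / beta                                # squared error: posterior mean
    linex = (alpha / a) * math.log(1.0 + a / beta)   # LINEX: -(1/a) ln E[exp(-a*theta)]
    ge = (beta**q * math.gamma(alpha - q) / math.gamma(alpha)) ** (-1.0 / q)
    return se, linex, ge                             # ge: general entropy, (E[theta^-q])^(-1/q)

se, linex, ge = bayes_estimates(alpha=5.0, beta=2.0)
```

With a &gt; 0, LINEX penalizes overestimation more heavily, so the LINEX estimate falls below the posterior mean; the general entropy estimate (here with q &gt; 0) behaves similarly.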
doi:10.11648/j.ajtas.20120101.12
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Omid Shojaee
Reza Azimi
Manoochehr Babanezhad
Volume 1, Issue 1, Page 15
Influential Authorities for Vaccination Policies and Barriers to Implementing Standing Orders for Influenza Vaccination among Nursing Facilities in 14 States, 2000-2002.
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20120101.11
To assess barriers to implementing standing order protocols (SOPs) for vaccinations and influential authorities in making vaccination decisions, together with the proportion of black residents and vaccination coverage in nursing homes. The Centers for Medicare & Medicaid Services and the Centers for Disease Control and Prevention surveyed approximately 280 nursing homes in 14 states in 2000-2002. Data from the On-line Survey and Certification Reporting System were included, as part of a demonstration project to adopt SOPs for vaccination and to assess barriers. Factor analysis and structural equation models were used to assess relationships of barriers and influential authorities to implementing SOPs. External facility concerns are barriers to implementing SOPs (p=.031), and nursing homes with higher proportions of black residents are more likely to report those concerns. The medical director and the facility administrator are the most influential authorities determining whether SOPs are implemented. The Quality Improvement Organization (QIO) and the state certification surveyor also played important roles in influencing staff making vaccination decisions. The state's QIO and the state certification surveyor may play important roles in addressing concerns about staff's authority to vaccinate under SOPs. Barriers external to the nursing home may play a more important role than internal facility barriers.
doi:10.11648/j.ajtas.20120101.11
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Barbara Bardenheier
Abigail Shefer
Stefan Gravenstein
Carolyn Furlow
Carol J. Rowland Hogue
Volume 1, Issue 1, Page 11
Time Series Outlier Analysis of Tea Price Data
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130201.11
In this article, Autoregressive Integrated Moving Average (ARIMA) models are fitted and outliers identified for the auction price of tea in three regions: North India, South India, and All India. ARIMA models with seasonal differencing are found to be quite appropriate for the data. The region-specific dynamics are distinctly assessed from the autocorrelation functions. Further, we are concerned with two special kinds of outliers in time series, the additive outlier (AO) and the innovational outlier (IO). These outliers have been detected using two recent methods, and conclusions are drawn from the data for the three regions. The reasons for these outliers in the tea price are further identified, pointing towards environmental factors, weather conditions, pest attacks, etc.
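The additive-outlier idea can be illustrated with a toy detector (not one of the two methods used in the paper): fit an AR(1) by least squares and flag observations whose standardized one-step residuals are extreme. The series, the injected outlier, and the threshold below are all illustrative assumptions.

```python
import numpy as np

# Simulate an AR(1) series and inject one additive outlier (AO).
rng = np.random.default_rng(42)
n, phi = 300, 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
x[150] += 8.0                # the additive outlier

# Fit AR(1) by least squares, then flag extreme standardized residuals.
phi_hat = x[1:] @ x[:-1] / (x[:-1] @ x[:-1])
resid = x[1:] - phi_hat * x[:-1]
z = resid / resid.std()
flags = np.where(np.abs(z) > 3.0)[0] + 1   # time indices in the series
```

An AO at time t perturbs the residuals at t and t+1 (with opposite signs for positive phi), which is the signature the classical AO test statistics exploit.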
doi:10.11648/j.ajtas.20130201.11
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
S. D. Krishnarani
Volume 2, Issue 1, Page 6
Comparative Analysis of Bayesian Control Chart Estimation and Conventional Multivariate Control Chart
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130201.12
A Bayesian model (the Beta-binomial conjugate) with a Bayesian sequential estimation method for estimating the proportion of patients in different age groups is compared with the conventional multivariate control chart method. The parameters for both techniques were derived and applied. The results show that the patients aged 15-44 in 2009, and those aged 44-64 and 64 and above in 2011, are out of control. This implies that the Bayesian sequential estimation method is very efficient at detecting any small shift among the patients using the hospital. The age brackets mentioned above were also very highly represented among the hospital's patients compared to others. The 2011 results show a large shift in the ages of attending patients for the 44-64 and 64-and-above groups, respectively.
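The Beta-binomial conjugate update behind such sequential estimation is simple: each batch's binomial counts are folded into the Beta prior, and the resulting posterior serves as the prior for the next batch. A sketch with made-up admission counts (the figures are illustrative, not the paper's data):

```python
def sequential_beta_update(batches, a=1.0, b=1.0):
    """Fold successive binomial batches (successes, trials) into a Beta prior."""
    for s, n in batches:
        a, b = a + s, b + n - s      # posterior of one batch is the next prior
    return a, b, a / (a + b)         # final Beta parameters and posterior mean

# illustrative yearly counts: patients in one age bracket out of all admissions
a, b, p_hat = sequential_beta_update([(30, 100), (45, 120), (60, 130)])
```

Because the update is conjugate, the result is identical to pooling all batches at once, which is what makes the sequential scheme cheap enough to monitor each new period as it arrives.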
doi:10.11648/j.ajtas.20130201.12
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Johnson Ademola Adewara
Ogundeji K. Rotimi
Volume 2, Issue 1, Page 11
Women Empowerment as an Essential Tool for National Transformation: Niger State, Nigeria Experience
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130202.11
Gender statistics is an area that cuts across traditional fields of statistics to identify, produce, and disseminate statistics that reflect the realities of the lives of women and men, and policy issues relating to gender. This paper studies women's empowerment as a powerful instrument for national transformation and sustained economic growth. An attempt is made to present the experience of the inclusion of women in Niger State, Nigeria, in such areas as leadership activity, economic opportunity, political empowerment, educational attainment, and health involvement. It is recommended that there should be changes in all sectors to promote gender equality and transform poor rural women into successful business managers.
doi:10.11648/j.ajtas.20130202.11
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
A. Isah
L. A. Nafiu
Volume 2, Issue 2, Page 14
Information Theoretic Models for Dependence Analysis and Missing Data Estimation
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130202.12
In the present communication, an information-theoretic dependence measure is defined using the maximum entropy principle; it measures the amount of dependence among the attributes in a contingency table. A relation between the information-theoretic measure of dependence and the chi-square statistic is discussed. A generalization of this information-theoretic dependence measure is also studied. Finally, Yates's method and maximum entropy estimation of missing data in the design of experiments are described and illustrated on practical problems with empirical data.
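The relation between an information-theoretic dependence measure and the chi-square statistic can be illustrated on a small contingency table: the likelihood-ratio G-statistic equals 2N times the mutual information (in nats), and Pearson's chi-square approximates it under weak dependence. A sketch (the table entries are illustrative):

```python
import numpy as np

def dependence_measures(table):
    """Mutual information (nats) and Pearson chi-square of a contingency table."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p = t / n
    # product of the marginals: the expected cell proportions under independence
    expected = p.sum(axis=1, keepdims=True) * p.sum(axis=0, keepdims=True)
    mask = p > 0
    mi = np.sum(p[mask] * np.log(p[mask] / expected[mask]))
    chi2 = n * np.sum((p - expected) ** 2 / expected)
    return mi, chi2

mi, chi2 = dependence_measures([[30, 10], [10, 50]])   # a clearly dependent table
```

For an exactly independent table both quantities are zero, and for this table 2N·MI ≈ 35.5 against chi-square ≈ 34.0, showing the closeness of the two statistics.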
doi:10.11648/j.ajtas.20130202.12
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
D. S. Hooda
Permil Kumar
Volume 2, Issue 2, Page 20
Linear Scale Dilation of Asset Returns
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130202.15
Comparing the order statistics of daily returns of the S&P 500 index from 03.01.1950 to 04.03.2013 with the corresponding rankits, a linear scale dilation is observed. This observation is used to derive a five-parameter density function for a parsimonious description of the unconditional distribution of stock returns. The typical graph of this density function looks like a wizard's hat; its signature feature is a discontinuity at zero.
doi:10.11648/j.ajtas.20130202.15
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
E. Reschenhofer
Volume 2, Issue 2, Page 41
Generalized Estimation of Missing Observations in Nonlinear Time Series Model Using State Space Representation
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130202.13
The aim of the study was to formulate a time series model for obtaining optimal estimates of missing observations. State space models and the Kalman filter were used to handle irregularly spaced data, in a non-Bayesian approach in which the missing values were treated as fixed parameters. Simulated AR(1) data and the corresponding estimates of missing values were generated using a computer programme. Values were withheld and then estimated as though they were missing. The results revealed that a simple exposition of the state space representation for commonly used time series models can be formulated.
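The missing-value treatment described above can be sketched with a scalar Kalman filter for an AR(1) state: at a missing time point the update step is simply skipped, so the one-step prediction stands in as the estimate of the missing observation. The parameter values and the five-point gap below are illustrative, not the paper's setup.

```python
import numpy as np

def kalman_ar1(y, phi, q, r, x0=0.0, p0=10.0):
    """Scalar Kalman filter for an AR(1) state observed with noise.
    np.nan entries in y are treated as missing: the update step is skipped
    and the one-step prediction stands in for the missing observation."""
    x, p, est = x0, p0, []
    for obs in y:
        x, p = phi * x, phi * phi * p + q          # predict
        if not np.isnan(obs):                      # update only if observed
            k = p / (p + r)
            x, p = x + k * (obs - x), (1 - k) * p
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(0)
n, phi = 200, 0.8
state = np.zeros(n)
for t in range(1, n):
    state[t] = phi * state[t - 1] + rng.normal()
y = state + 0.5 * rng.normal(size=n)
y[50:55] = np.nan                                  # a gap of five missing values
filtered = kalman_ar1(y, phi=phi, q=1.0, r=0.25)
```

Inside the gap the state covariance p grows each step, which is how the filter expresses increasing uncertainty about the interpolated values.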
doi:10.11648/j.ajtas.20130202.13
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Biwott K. Daniel
Odongo O. Leo
Volume 2, Issue 2, Page 28
Discordancy in Reduced Dimensions of Outliers in High-Dimensional Datasets: Application of an Updating Formula
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130202.14
In multivariate outlier studies, the sum of squares and cross-products (SSCP) matrix is an important property of the data matrix. For example, the much-used Mahalanobis distance and the Wilks ratio make use of SSCP matrices. One of the SSCP matrices involved in outlier studies is the matrix for the set of multiple outliers in the data. In this paper, an explicit expression for this matrix is derived. It is then shown that, in general, the discordancy of multiple outliers is preserved along Multiple-Outlier Displaying Components with much lower dimensions than the original high-dimensional dataset.
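How the SSCP matrix enters the Mahalanobis distance can be shown in a few lines (a generic sketch, not the paper's updating formula; the planted outlier is illustrative):

```python
import numpy as np

def mahalanobis_from_sscp(X):
    """Squared Mahalanobis distances via the corrected SSCP matrix."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    d = X - X.mean(axis=0)
    sscp = d.T @ d                         # sum of squares and cross-products
    s_inv = np.linalg.inv(sscp / (n - 1))  # inverse sample covariance
    return np.einsum('ij,jk,ik->i', d, s_inv, d)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
X[0] = [6.0, 6.0, 6.0]                     # a planted multivariate outlier
d2 = mahalanobis_from_sscp(X)
```

A known identity makes a handy check: with the sample covariance on the (n-1) divisor, the squared distances always sum to p(n-1), so their mean here is exactly 3·99/100 = 2.97.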
doi:10.11648/j.ajtas.20130202.14
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
B. K. Nkansah
B. K. Gordor
Volume 2, Issue 2, Page 37
Using the Markov Chain Monte Carlo Method to Make Inferences on Items of Data Contaminated by Missing Values
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130203.12
Markov Chain Monte Carlo (MCMC) is a method used to estimate parameters of interest under difficult conditions, such as missing data, or when underlying distributions do not fit the assumptions of maximum likelihood procedures. Its objective is to find a probability distribution, known in Bayesian analysis as the posterior distribution, that can be used to estimate the target parameters. In this paper, we consider a case where data are contaminated with missing values and therefore need to be adequately handled with missing data techniques before inferences are made. A review of the mathematics involved in MCMC procedures in the presence of missing data is presented. Furthermore, we use real data to compare inferences made using multiple imputation based on the multivariate normal model (MVN), which uses the MCMC procedure; the case deletion (CD) method, which discards subjects with missing values from the analysis; and the fully conditional specification (FCS) multiple imputation method, which uses a sequence of regression models to fill in missing values. Assuming that data are missing completely at random (MCAR) on continuous and normally distributed variables, the following findings are obtained: (1) The higher the proportion of missing data on a variable of interest, the more the relationship between that variable and the dependent variable is distorted, whichever missing data method is applied. (2) The multiple imputation based methods produce similar estimates, which are better than those from the case deletion method. (3) At some stage, when the proportion of missing data becomes high, none of the missing data techniques can maintain an initially existing relationship between the dependent variable and some of the covariates of interest.
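A toy version of regression-based imputation under MCAR can make the idea concrete (a chained-equations-style sketch, far simpler than the MVN/MCMC and FCS procedures the paper compares; all numbers are simulated):

```python
import numpy as np

# Simulate a linear relationship, then delete 30% of y completely at random.
rng = np.random.default_rng(7)
n = 500
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)
miss = rng.random(n) < 0.3
y_obs = np.where(miss, np.nan, y)

# Start from mean imputation, then alternate: refit the regression on the
# completed data and redraw the missing y's from the fitted conditional.
y_fill = np.where(miss, np.nanmean(y_obs), y_obs)
for _ in range(200):
    b1, b0 = np.polyfit(x, y_fill, 1)
    sd = (y_fill - (b0 + b1 * x)).std()
    y_fill[miss] = b0 + b1 * x[miss] + rng.normal(scale=sd, size=miss.sum())

b1_hat, b0_hat = np.polyfit(x, y_fill, 1)   # slope and intercept after imputation
```

Redrawing with residual noise, rather than plugging in the fitted mean, is what keeps the imputed data from artificially strengthening the regression relationship.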
doi:10.11648/j.ajtas.20130203.12
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
I. Karangwa
D. Kotze
Volume 2, Issue 3, Page 53
Estimation of the Expected Period of Acquired Tuberculosis to Become a Chronic Tuberculosis
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130203.11
In order to assess the hazards posed by chronic tuberculosis, a table similar to a life table is prepared to estimate the expected period for acquired tuberculosis to become chronic tuberculosis, an ailment attributable to Mycobacterium tuberculosis. Confidence bounds for the estimate are also derived. An example is given using a data set from the University of Ilorin Teaching Hospital, Ilorin, Kwara State, Nigeria, where tuberculosis patients were monitored over several years. The data were analyzed using the Statistical Package for the Social Sciences (SPSS) version 15 (SPSS Inc., Chicago, IL, USA). The results show that, for potential patients, the expected period is 6.796 years before the infection becomes a chronic infection (tuberculosis), and the 95% confidence interval for the estimated period is 6.7763 to 6.8157. It is therefore recommended that health policy makers formulate policies to curb the spread of the disease.
doi:10.11648/j.ajtas.20130203.11
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
O. M. Adetutu
L. A. Nafiu
Volume 2, Issue 3, Page 47
Latent Growth Curve Modeling of Psychological Well-Being Trajectories
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130203.14
This paper proposes modeling trajectories of psychological well-being using latent growth curve models (LGCMs). The psychometric scale of the General Health Questionnaire-12 (GHQ-12) is considered, with data from the British Household Panel Survey (BHPS) for the years 2003 to 2006. In 1991, Graetz proposed the GHQ-12 as a multidimensional scale containing three distinct dimensions: anxiety and depression, social dysfunction, and loss of confidence. Using this scale, the paper compares a second-order LGCM for the trajectories of a latent factor (measured by these three dimensions) with an LGCM for the trajectories of an overall sum score. Conditional LGCMs are then fitted, with sex, age group, and perceived health status as explanatory variables of the growth trajectories. Results show that the model considering the three dimensions of subjective well-being has greater explanatory capability than the one using the subjective well-being score.
doi:10.11648/j.ajtas.20130203.14
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
M. Fátima Salgueiro
Joana Malta
Volume 2, Issue 3, Page 66
Noise and Signal Estimation in MRI: Two-Parametric Analysis of Rice-Distributed Data by Means of the Maximum Likelihood Approach
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130203.15
This paper elaborates a new approach to image analysis based on the maximum likelihood method, which allows simultaneous estimation of both the image noise and the signal within the Rician statistical model. An essential novelty and advantage of the proposed approach is that it reduces the task of solving a system of two nonlinear equations in two unknowns to that of calculating one variable from a single equation. Solving this task is important in particular for processing magnetic-resonance images, as well as for mining data from any kind of image through analysis of the signal’s envelope. A distinctive feature of the treatment presented here is that the developed theoretical technique can be applied to the design of noise-suppression algorithms by calculating not only the signal’s mean value but also the dispersion of the Rice-distributed signal. From the viewpoint of computational cost, estimating both parameters by the proposed technique turns out to be no more complicated than one-parametric optimization. The paper focuses on a deep theoretical analysis of the maximum likelihood method for the two-parametric task of processing Rician-distributed images. As the maximum likelihood method is known to be the most precise, its two-parametric version developed here can be considered both a new effective tool for processing Rician images and a useful benchmark for evaluating the precision of other two-parametric techniques by comparison with the one proposed in the present paper.
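As a hedged illustration of the estimation problem the paper addresses, the sketch below fits both the signal ν and the noise σ of Rice-distributed data by direct numerical maximization of the likelihood; the data are synthetic and the two-dimensional search is exactly what the paper's one-equation reduction is designed to avoid.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0e
from scipy.stats import rice

# Synthetic Rician envelope data: signal nu in Gaussian noise of std sigma
# (illustrative values, not from the paper).
nu_true, sigma_true = 3.0, 1.0
x = rice.rvs(nu_true / sigma_true, scale=sigma_true, size=20000, random_state=0)

def neg_log_lik(params):
    """Negative log-likelihood of the Rice pdf (x/s^2) exp(-(x^2+nu^2)/(2 s^2)) I0(x nu / s^2)."""
    nu, sigma = params
    if nu <= 0 or sigma <= 0:
        return np.inf
    z = x * nu / sigma**2
    log_i0 = np.log(i0e(z)) + z   # log I0(z) without overflowing the Bessel function
    return -np.sum(np.log(x) - 2.0 * np.log(sigma)
                   - (x**2 + nu**2) / (2.0 * sigma**2) + log_i0)

res = minimize(neg_log_lik, x0=[x.mean(), x.std()], method="Nelder-Mead")
nu_hat, sigma_hat = res.x
```

This direct two-parameter optimization serves only as a baseline; the paper's contribution is to replace the joint search with a single equation in one variable.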
Noise and Signal Estimation in MRI: Two-Parametric Analysis of Rice-Distributed Data by Means of the Maximum Likelihood Approach
doi:10.11648/j.ajtas.20130203.15
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Tatiana V. Yakovleva
Nicolas S. Kulberg
Noise and Signal Estimation in MRI: Two-Parametric Analysis of Rice-Distributed Data by Means of the Maximum Likelihood Approach
2
3
80
80
2014-01-01
2014-01-01
10.11648/j.ajtas.20130203.15
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130203.15
© Science Publishing Group
Definition of Probability Characteristics of the Absolute Maximum of Non-Gaussian Random Processes by Example of Hoyt Process
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130203.13
A technique for finding the distribution functions of the absolute maximum of non-Gaussian random processes is illustrated. Using the Hoyt process as an example, the limiting distribution laws of its absolute maximum are found. By methods of statistical modeling it is established that the given asymptotic approximations provide a satisfactory description of the true distributions over a wide range of parameter values of the random process.
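For intuition, the distribution of the absolute maximum of a Hoyt (Nakagami-q) envelope can be estimated by plain Monte Carlo; the parameter values below are illustrative, not the paper's, and the simulation stands in for the paper's analytic asymptotics.

```python
import numpy as np

rng = np.random.default_rng(1)
q = 0.5          # Hoyt parameter: ratio of the quadrature std devs (assumed value)
n_steps = 256    # samples per realization of the process
n_paths = 2000   # Monte Carlo realizations

# Hoyt envelope: sqrt(X^2 + Y^2) with unequal Gaussian variances in the two quadratures.
x = rng.normal(0.0, 1.0, (n_paths, n_steps))
y = rng.normal(0.0, q, (n_paths, n_steps))
env = np.hypot(x, y)

abs_max = env.max(axis=1)                    # absolute maximum of each realization
levels = np.linspace(2.0, 5.0, 7)
cdf = np.array([(abs_max <= u).mean() for u in levels])   # empirical CDF of the maximum
```

An asymptotic approximation of the kind the paper derives would be validated by comparing it against such an empirical CDF.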
Definition of Probability Characteristics of the Absolute Maximum of Non-Gaussian Random Processes by Example of Hoyt Process
doi:10.11648/j.ajtas.20130203.13
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
O. V. Chernoyarov
A. V. Salnikova
Ya. A. Kupriyanova
Definition of Probability Characteristics of the Absolute Maximum of Non-Gaussian Random Processes by Example of Hoyt Process
2
3
60
60
2014-01-01
2014-01-01
10.11648/j.ajtas.20130203.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130203.13
© Science Publishing Group
Process Optimization for Synthesis of Anti-Tuberculosis Drug Catalyzed by Fluorapatite Supported Potassium Fluoride
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130203.16
The optimization of the synthesis of an anti-tuberculosis drug by thia-Michael addition between thiophenol and chalcone, catalyzed by activated fluorapatite-supported potassium fluoride (KF/FAP), was studied using a two-block central composite design with four factors (reaction time, solvent volume, catalyst weight and impregnation ratio). The high reactivity and regioselectivity of this catalyst, coupled with its ease of use and reduced environmental impact, make it an attractive alternative to homogeneous basic reagents.
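The design matrix of a four-factor central composite design of the kind described can be sketched as follows; the axial distance and the number of centre-point replicates are illustrative choices, and the two-block structure is omitted.

```python
import itertools
import numpy as np

k = 4                                  # reaction time, solvent volume, catalyst weight, impregnation ratio
alpha = (2 ** k) ** 0.25               # rotatable axial distance: fourth root of the factorial run count

# 2^k factorial corners in coded units (-1, +1)
corners = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))

# 2k axial (star) points at +/- alpha on each axis
axial = np.zeros((2 * k, k))
for j in range(k):
    axial[2 * j, j], axial[2 * j + 1, j] = -alpha, alpha

center = np.zeros((4, k))              # hypothetical number of replicated centre runs
design = np.vstack([corners, axial, center])
```

Each row is one experimental run in coded factor levels; the response surface model is then fitted to the measured yields at these settings.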
Process Optimization for Synthesis of Anti-Tuberculosis Drug Catalyzed by Fluorapatite Supported Potassium Fluoride
doi:10.11648/j.ajtas.20130203.16
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Younes Abrouki
Abdelkader Anouzla
Hayat Loukili
Rabiaâ Lotfi
Ahmed Rayadh
Abdellah Bahlaoui
Saïd Sebti
Driss Zakarya
Mohamed Zahouily
Process Optimization for Synthesis of Anti-Tuberculosis Drug Catalyzed by Fluorapatite Supported Potassium Fluoride
2
3
86
86
2014-01-01
2014-01-01
10.11648/j.ajtas.20130203.16
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130203.16
© Science Publishing Group
Mathematical and Simulation of Lid Driven Cavity Flow at Different Aspect Ratios Using Single Relaxation Time Lattice Boltzmann Technique
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130203.17
In this paper we consider the restrictions on the choice of relaxation time in single relaxation time (SRT) models, which generally limit simulations of flow based on this technique. In the current study the SRT lattice Boltzmann equation is used to simulate lid-driven cavity flow at various Reynolds numbers (100-5000) and three aspect ratios, K = 1, 1.5 and 4. A point vital to the convergence of this technique is how the boundary conditions are implemented; two kinds of boundary conditions, no-slip and constant inlet velocity, are imposed in the present work. For the square cavity, results show that with increasing Reynolds number the bottom corner vortices grow but do not merge. When the aspect ratio equals four and the Reynolds number exceeds 1000, the simulations predict four primary vortices, which previous single relaxation time models have not predicted. The results are compared with those of a previous multi-relaxation-time model.
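The restriction on the relaxation time in the SRT (BGK) model comes from its link to the lattice viscosity, ν = (τ − 1/2)/3 in lattice units, which forces τ > 1/2 and ties τ to the Reynolds number. A minimal sketch with illustrative lattice parameters (lid speed and grid size are assumptions, not the paper's settings):

```python
# Illustrative lattice setup: lid speed U and cavity resolution N_nodes in lattice units.
Re, U, N_nodes = 1000.0, 0.1, 256

nu = U * (N_nodes - 1) / Re     # lattice kinematic viscosity implied by the target Re
tau = 3.0 * nu + 0.5            # BGK relation nu = (tau - 1/2)/3 solved for tau
```

As Re grows at fixed U and resolution, τ is pushed toward the stability limit 1/2, which is the practical restriction the abstract refers to.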
Mathematical and Simulation of Lid Driven Cavity Flow at Different Aspect Ratios Using Single Relaxation Time Lattice Boltzmann Technique
doi:10.11648/j.ajtas.20130203.17
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Anil Kumar
S P Agrawal
Mathematical and Simulation of Lid Driven Cavity Flow at Different Aspect Ratios Using Single Relaxation Time Lattice Boltzmann Technique
2
3
93
93
2014-01-01
2014-01-01
10.11648/j.ajtas.20130203.17
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130203.17
© Science Publishing Group
Evaluation of Artificial Neural Networks in Foreign Exchange Forecasting
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130204.11
This study investigates the modeling, description and forecasting of the exchange rates of four currencies (the Great Britain Pound, Japanese Yen, Nigerian Naira and Botswana Pula) using an Artificial Neural Network (ANN); the objective of this paper is to use the ANN to predict the trend of these four currencies. The ANN was used in training and learning processes, and the forecast performance was then evaluated using various loss functions: root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and the Theil inequality coefficient (TIC). These loss functions are good indicators of the forecast performance of the series; the series with the lowest loss gave the best forecast performance. Results show that the ANN is a very effective tool for exchange rate forecasting. Classical statistical methods are unable to efficiently handle the prediction of financial time series owing to their non-linearity, non-stationarity and high degree of noise. Advanced intelligence techniques have been used in many financial markets to forecast the future development of different capital markets, and the artificial neural network is a well-tested method for financial market analysis.
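The loss functions named above can be computed directly; the sketch below uses toy numbers and the common textbook forms (in particular Theil's U as RMSE normalized by the root mean squares of the two series), which may differ in detail from the paper's definitions.

```python
import numpy as np

def forecast_metrics(actual, pred):
    """RMSE, MAE, MAPE (%) and Theil's inequality coefficient for one series."""
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    err = actual - pred
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / actual))
    # Theil's U: bounded in [0, 1], 0 meaning a perfect forecast
    tic = rmse / (np.sqrt(np.mean(actual ** 2)) + np.sqrt(np.mean(pred ** 2)))
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape, "TIC": tic}

# Toy exchange-rate series (hypothetical values)
m = forecast_metrics([100, 102, 105, 103], [101, 101, 104, 104])
```

Comparing these metrics across the four currency series is how the best-forecast series would be identified.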
Evaluation of Artificial Neural Networks in Foreign Exchange Forecasting
doi:10.11648/j.ajtas.20130204.11
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Akintunde Mutairu Oyewale
Evaluation of Artificial Neural Networks in Foreign Exchange Forecasting
2
4
101
101
2014-01-01
2014-01-01
10.11648/j.ajtas.20130204.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130204.11
© Science Publishing Group
A Method for Topographical Estimation of Lake Bottoms by B-Spline Surface
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130204.12
The application of a B-spline (basis spline) surface to the estimation of lake bottom topography is described. Using a bivariate B-spline analysis, the shape of the lake bottom is approximated, and given the validity of the estimation by the bivariate B-spline function, the method is applied to actual lake depth data. Surveys over a water area present more difficulties than those on land, and the measurement data are distributed quite irregularly: the locations of the measured data are not spread regularly over the lake but follow the wake of the boat from which the sample data were collected, so the data density is quite high in some small regions and quite low in other wide regions. Based on such irregular data, a statistical estimation was attempted. A regularization term with a penalty coefficient yields a proper approximation of the parameters of the B-spline functions. Many factors are involved, such as the number of knots, their locations, the number of B-spline functions and the coefficient of the penalty term. An appropriate information criterion, offering sufficient accuracy with a small amount of computation, is applied to determine the optimal model.
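SciPy's penalized bivariate spline gives a rough stand-in for the approach described: scattered, track-like (x, y) soundings, a bivariate B-spline, and a smoothing factor playing the role of the penalty coefficient. The data below are synthetic, and knot placement is left to the library rather than chosen by an information criterion as in the paper.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(2)
# Scattered "soundings" along a boat track: irregular (x, y) locations, noisy depth z.
t = rng.uniform(0.0, 1.0, 400)
x = t + 0.05 * rng.standard_normal(400)
y = 0.5 * np.sin(3.0 * t) + 0.5 + 0.05 * rng.standard_normal(400)
true_depth = lambda x, y: 10.0 + 3.0 * np.sin(2.0 * x) + 2.0 * np.cos(2.0 * y)
z = true_depth(x, y) + 0.1 * rng.standard_normal(400)

# Penalized bivariate B-spline; the smoothing factor s acts like the
# penalty coefficient balancing fidelity against over-fitting.
spline = SmoothBivariateSpline(x, y, z, kx=3, ky=3, s=len(z) * 0.01)
z_hat = spline.ev(x, y)
rmse = np.sqrt(np.mean((z_hat - z) ** 2))
```

In the paper's setting, s (and the knot configuration) would be selected by minimizing an information criterion rather than fixed in advance.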
A Method for Topographical Estimation of Lake Bottoms by B-Spline Surface
doi:10.11648/j.ajtas.20130204.12
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
H. Bao
K. Fueda
A Method for Topographical Estimation of Lake Bottoms by B-Spline Surface
2
4
109
109
2014-01-01
2014-01-01
10.11648/j.ajtas.20130204.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130204.12
© Science Publishing Group
On Nonnegative Integer-Valued Lévy Processes and Applications in Probabilistic Number Theory and Inventory Policies
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130205.11
Discrete compound Poisson processes (namely, nonnegative integer-valued Lévy processes) have the property that more than one event can occur in an arbitrarily small time interval. These stochastic processes produce the discrete compound Poisson distributions. In this article, we introduce ten approaches to deriving the probability mass function of discrete compound Poisson distributions, and we obtain seven approaches to deriving the probability mass function of Poisson distributions. Finally, we discuss the connection between additive functions in probabilistic number theory and discrete compound Poisson distributions and give a numerical example. Stuttering Poisson distributions (a special case of discrete compound Poisson distributions) are applied to the numerical solution of optimal (s, S) inventory policies by using a continuous approximation method.
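A stuttering (discrete compound) Poisson variable can be simulated as a Poisson number of iid positive-integer batches, which is exactly the "several events per instant" property; the batch-size distribution below is illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, t = 2.0, 1.0              # event-cluster rate and time horizon (assumed values)
batch_p = [0.6, 0.3, 0.1]      # P(X = 1), P(X = 2), P(X = 3): batch size per event

def compound_poisson(n_sims):
    # N(t) = sum of M iid positive-integer batch sizes, M ~ Poisson(lam * t),
    # so several unit jumps can occur at the same arrival epoch.
    m = rng.poisson(lam * t, size=n_sims)
    totals = np.zeros(n_sims, dtype=int)
    for i, mi in enumerate(m):
        totals[i] = rng.choice([1, 2, 3], size=mi, p=batch_p).sum()
    return totals

n = compound_poisson(20000)
mean_theory = lam * t * np.dot([1, 2, 3], batch_p)   # E N(t) = lam * t * E X
```

The empirical mean of the simulated totals should match the Wald identity E N(t) = λt·E X, here 3.0.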
On Nonnegative Integer-Valued Lévy Processes and Applications in Probabilistic Number Theory and Inventory Policies
doi:10.11648/j.ajtas.20130205.11
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Huiming Zhang
Jiao He
Hanlin Huang
On Nonnegative Integer-Valued Lévy Processes and Applications in Probabilistic Number Theory and Inventory Policies
2
5
121
121
2014-01-01
2014-01-01
10.11648/j.ajtas.20130205.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130205.11
© Science Publishing Group
Optimum Allocation of Multi-Items in Stratified Random Sampling Using Principal Component Analysis
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130205.14
The problem of allocation with more than one characteristic in stratified sampling is conflicting in nature, as the best allocation for one characteristic will not in general be best for the others. Some compromise must be reached to obtain an allocation that is efficient for all characteristics. In this study, the allocation of a sample to strata which minimizes the cost of investigation, subject to a given condition on the sampling error, was considered. Data on four socioeconomic characteristics of 400 heads of households in the Abeokuta South and Ijebu North Local Government Areas (LGAs) of Ogun State, Nigeria were investigated, comprising 200 households from each LGA. The characteristics were occupation, income, household size and educational level. Optimal allocation for multiple items was formulated as a multivariate optimization problem by finding the principal components, that is, by determining the overall linear combinations that concentrate the variability into a few variables. The principal component analysis showed that, for both the Abeokuta and Ijebu data sets, the variance based on the four characteristics treated as multivariate is less than that of the variables considered individually as univariate. The results also showed no difference in the percentage of the total variance accounted for by the different components of the merged sample compared with the individual samples. Optimum allocation was achieved when there was stratification.
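The dimension-reduction step described, finding linear combinations that concentrate the variability, amounts to an eigen-decomposition of the correlation matrix. A sketch with simulated stand-ins for the four household characteristics (hypothetical data, not the Ogun State survey):

```python
import numpy as np

rng = np.random.default_rng(4)
# Simulated stand-ins for occupation, income, household size and education
# on 200 households; three of the four share a common underlying factor.
base = rng.standard_normal(200)
data = np.column_stack([
    base + 0.3 * rng.standard_normal(200),
    2.0 * base + 0.5 * rng.standard_normal(200),
    rng.standard_normal(200),               # one independent characteristic
    base + rng.standard_normal(200),
])

# Eigen-decomposition of the correlation matrix: the principal components
# are the linear combinations that concentrate the variability.
eigval = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
explained = eigval / eigval.sum()   # proportion of total variance per component
```

The leading components, rather than the four raw characteristics, would then drive the allocation of the sample across strata.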
Optimum Allocation of Multi-Items in Stratified Random Sampling Using Principal Component Analysis
doi:10.11648/j.ajtas.20130205.14
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Apantaku Fadeke Sola
Olayiwola Olaniyi Mathew
Adewara Amos Adedayo
Optimum Allocation of Multi-Items in Stratified Random Sampling Using Principal Component Analysis
2
5
148
148
2014-01-01
2014-01-01
10.11648/j.ajtas.20130205.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130205.14
© Science Publishing Group
Efficiency of Neyman Allocation Procedure over other Allocation Procedures in Stratified Random Sampling
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130205.12
In sampling we are interested in precision, and to improve precision we make use of prior knowledge of the population by dividing it into a series of homogeneous groups. When the population of interest can be divided into k homogeneous groups and a sample of observations is taken from each group, we have a stratified random sample, and each group is called a stratum. This study was designed to investigate the efficiency of the Neyman allocation procedure over equal and proportional allocation. The data used for this research were primary data on the prices of Peak Milk (Nigeria made) collected from ten markets in Abeokuta, Ogun State, Nigeria. A stratified random sampling scheme was used, with each market standing as a stratum. From each stratum an independent sample was selected randomly under the equal, proportional and Neyman/optimum allocation procedures. A statistic was obtained from each stratum, and a combined estimate of the separate statistics was obtained for each allocation procedure. From the analysis, the mean and variance under the Neyman allocation procedure were 1356.672 and 21.45 respectively; for proportional allocation the mean was 1349.3069 and the variance 38.98, while equal allocation gave a mean of 1352 and a variance of 170.3238. The Neyman/optimum allocation procedure gave the least variance, followed by proportional allocation and then equal allocation. Hence, for estimating the average and variance of the prices of Peak Milk in the Abeokuta markets, the Neyman allocation procedure is the best, and thus the most efficient, of the three allocation procedures considered in this paper.
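The three allocation rules compare as follows in a minimal sketch; the stratum sizes and standard deviations are illustrative, not the Peak Milk market data. Neyman allocation takes n_h proportional to N_h·S_h and minimizes the variance of the stratified mean.

```python
import numpy as np

# Illustrative stratum sizes N_h and within-stratum std devs S_h (assumed values);
# total sample size n = 100.
N = np.array([600.0, 300.0, 100.0])
S = np.array([20.0, 30.0, 10.0])
n = 100

n_equal = np.full(3, n / 3)                  # equal allocation
n_prop = n * N / N.sum()                     # proportional to stratum size
n_neyman = n * N * S / (N * S).sum()         # Neyman: proportional to N_h * S_h

def strat_var(n_h):
    """Variance of the stratified sample mean (fpc ignored): sum of W_h^2 S_h^2 / n_h."""
    W = N / N.sum()
    return float(np.sum(W**2 * S**2 / n_h))

v_equal, v_prop, v_neyman = map(strat_var, (n_equal, n_prop, n_neyman))
```

With these inputs the ordering v_neyman < v_prop < v_equal mirrors the ranking reported in the abstract, though the ordering of equal versus proportional allocation depends on how the stratum variances line up with the stratum sizes.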
Efficiency of Neyman Allocation Procedure over other Allocation Procedures in Stratified Random Sampling
doi:10.11648/j.ajtas.20130205.12
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Olayiwola Olaniyi Mathew
Apantaku Fadeke Sola
Bisira Hammed Oladiran
Adewara Adedayo Amos
Efficiency of Neyman Allocation Procedure over other Allocation Procedures in Stratified Random Sampling
2
5
127
127
2014-01-01
2014-01-01
10.11648/j.ajtas.20130205.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130205.12
© Science Publishing Group
Bayesian Estimation Using MCMC Approach Based on Progressive First-Failure Censoring from Generalized Pareto Distribution
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130205.13
In this paper, based on a new type of censoring scheme called progressive first-failure censoring, the maximum likelihood (ML) and Bayes estimators for the two unknown parameters of the Generalized Pareto (GP) distribution are derived. This type of censoring contains as special cases various censoring schemes used in the literature. A Bayesian approach using the Markov Chain Monte Carlo (MCMC) method to generate samples from the posterior distributions, and in turn compute the Bayes estimators, is developed. Point estimation and confidence intervals based on maximum likelihood and bootstrap methods are also proposed. The approximate Bayes estimators are obtained under the assumptions of informative and non-informative priors. A numerical example is provided to illustrate the proposed methods. Finally, the maximum likelihood and the different Bayes estimators are compared via a Monte Carlo simulation study.
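A minimal random-walk Metropolis sampler for the two GP parameters, under a flat prior and a complete (uncensored) synthetic sample, gives the flavor of the MCMC step; the paper's setting of progressive first-failure censoring with informative priors changes the likelihood and prior but not this basic mechanism.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)
# Synthetic complete sample from a GP with shape 0.2 and scale 2 (assumed values).
data = genpareto.rvs(c=0.2, scale=2.0, size=300, random_state=6)

def log_post(theta):
    # Flat (non-informative) prior on (xi, log sigma); likelihood from the GP pdf.
    xi, log_sig = theta
    lp = genpareto.logpdf(data, c=xi, scale=np.exp(log_sig)).sum()
    return lp if np.isfinite(lp) else -np.inf

# Random-walk Metropolis over (xi, log sigma)
chain = np.empty((5000, 2))
theta = np.array([0.1, 0.0])
lp = log_post(theta)
accepted = 0
for i in range(len(chain)):
    prop = theta + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
        accepted += 1
    chain[i] = theta

# Posterior means after discarding a burn-in period
xi_hat = chain[1000:, 0].mean()
sigma_hat = np.exp(chain[1000:, 1]).mean()
```

Under squared-error loss the Bayes estimator is this posterior mean; other loss functions lead to different functionals of the same chain.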
Bayesian Estimation Using MCMC Approach Based on Progressive First-Failure Censoring from Generalized Pareto Distribution
doi:10.11648/j.ajtas.20130205.13
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Mohamed Abdul Wahab Mahmoud
Ahmed Abo-Elmagd Soliman
Ahmed Hamed Abd Ellah
Rashad Mohamed El-Sagheer
Bayesian Estimation Using MCMC Approach Based on Progressive First-Failure Censoring from Generalized Pareto Distribution
2
5
141
141
2014-01-01
2014-01-01
10.11648/j.ajtas.20130205.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130205.13
© Science Publishing Group
Parameters Estimation Based on Progressively Censored Data from Inverse Weibull Distribution
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.11
In this article, our main aim is to investigate parameter estimation for the inverse Weibull distribution in the framework of progressive type-II censoring. We consider a censored sample from a two-parameter inverse Weibull distribution. The point estimators of the parameters are derived using the maximum likelihood method, and the exact joint confidence region and confidence intervals for the parameters are obtained. A numerical example is provided to illustrate the estimation methods developed here.
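For a complete (uncensored) sample, the maximum likelihood fit of the two-parameter inverse Weibull is available directly in SciPy; this is only a baseline sketch, since the paper's estimators are built for progressively type-II censored data, which the plain `fit` call does not handle.

```python
import numpy as np
from scipy.stats import invweibull

# Synthetic complete sample: inverse Weibull (Frechet) with shape 2.5, scale 3
# (assumed values, not the paper's numerical example).
data = invweibull.rvs(2.5, scale=3.0, size=2000, random_state=7)

# ML fit with the location fixed at zero, leaving the two parameters of interest.
shape_hat, loc_hat, scale_hat = invweibull.fit(data, floc=0)
```

Under progressive censoring the likelihood gains survival-function factors for the removed units, so the estimating equations differ from the complete-sample ones solved here.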
Parameters Estimation Based on Progressively Censored Data from Inverse Weibull Distribution
doi:10.11648/j.ajtas.20130206.11
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Mostafa M. MohieEl-Din
Fathy H. Riad
Mohamed A. El-Sayed
Parameters Estimation Based on Progressively Censored Data from Inverse Weibull Distribution
2
6
153
153
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.11
© Science Publishing Group
Estimate of Subject Specific Index of Relative Performance in ‘K’ Samples
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.12
This paper proposes and develops a statistic, here termed the ‘relative performance index’ or ‘index of relative performance’, for preferentially rank-ordering subjects, both within and between several sampled populations, by their performance relative to the other subjects from these populations involved in a test or contest. The proposed index would enable decisions on the preferential selection of subjects both within and between various classifications for management purposes. The proposed method enables estimation of the median and other percentiles not only of each of the sampled populations but also of the common median of the several populations, as functions of the relative performance indices. Unlike some other methods used for the analysis of many samples, the method is based mostly on individual subjects rather than only on summary indices or averages. Test statistics, also based on the subject-specific relative performance indices, are developed to test desired hypotheses concerning the populations. Because the proposed indices are subject-specific rather than merely summary averages, they enable one to examine more clearly and succinctly an individual subject’s relative performance, or level of seriousness in a condition, in comparison with other subjects from the sampled populations, thereby providing subject-targeted information to better guide any interventionist actions on a condition of research interest. The method is illustrated with some data and shown to compare favorably with some existing methods.
Estimate of Subject Specific Index of Relative Performance in ‘K’ Samples
doi:10.11648/j.ajtas.20130206.12
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Oyeka Ikewelugo Cyprian Anaene
Okeh Uchechukwu Marius
Estimate of Subject Specific Index of Relative Performance in ‘K’ Samples
2
6
165
165
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.12
© Science Publishing Group
Method of Principal Factors Estimation of Optimal Number of Factors: An Information Criteria Approach
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.13
The issue of the number of factors to be retained in a factor analysis has remained unsettled. Be that as it may, this paper examines the number of factors (k) to be retained in a factor analysis for different sample sizes using the method of Principal Factor estimation when the number of variables is ten (10). Simulated data were used for sample sizes of 30, 50 and 70, and the Akaike Information Criterion (AIC), the Schwarz Information Criterion (SIC) and the Hannan-Quinn Information Criterion (HQIC) values were obtained when the number of factors (k) is two, three and five (2, 3 and 5). It was discovered that the AIC, SIC and HQIC values are smallest when k = 5 and highest when k = 2 for the sample sizes of 30 and 70, but for a sample size of 50 the values of these information criteria are smallest for k = 3 and highest for k = 5. Hence the conclusion is drawn that for the sample sizes of 30 and 70 the optimal number of factors to retain is 5, while for the sample size of 50 it is 3. This implies that the number of factors to retain is a function of the sample size of the data.
Method of Principal Factors Estimation of Optimal Number of Factors: An Information Criteria Approach
doi:10.11648/j.ajtas.20130206.13
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Nwosu Dozie. F.
Onyeagu Sidney. I.
Osuji George A.
Method of Principal Factors Estimation of Optimal Number of Factors: An Information Criteria Approach
2
6
175
175
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.13
© Science Publishing Group
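The criteria compared in the abstract above are all simple penalized log-likelihood functions. A minimal sketch of their standard formulas follows; the log-likelihood values and the free-parameter count for a 10-variable, k-factor model below are illustrative assumptions, not the paper's data:

```python
import math

def information_criteria(log_lik, n_params, n_obs):
    """Return AIC, SIC (Schwarz/BIC) and HQIC for a fitted model.

    AIC  = -2*loglik + 2m
    SIC  = -2*loglik + m*ln(n)
    HQIC = -2*loglik + 2m*ln(ln(n))
    """
    m, n = n_params, n_obs
    return {
        "AIC": -2.0 * log_lik + 2.0 * m,
        "SIC": -2.0 * log_lik + m * math.log(n),
        "HQIC": -2.0 * log_lik + 2.0 * m * math.log(math.log(n)),
    }

# Hypothetical log-likelihoods for k = 2, 3, 5 factors on n = 50 observations;
# the model with the smallest criterion value is preferred.
fits = {2: -412.7, 3: -398.2, 5: -396.9}
for k, ll in fits.items():
    # Free parameters of a k-factor model with p = 10 variables:
    # p*k loadings + p uniquenesses - k*(k-1)/2 rotational constraints.
    m = 10 * k + 10 - k * (k - 1) // 2
    print(k, information_criteria(ll, m, 50))
```

Note how SIC penalizes extra parameters more heavily than AIC for n > 7, which is why the criteria can disagree on the optimal k as the sample size changes.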
Application of Multivariate Methods for Assessment of Variations in Rivers/Streams Water Quality in Niger State, Nigeria
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.14
Multivariate statistical methods, Cluster Analysis (CA) and Canonical Discriminant Analysis (CDA), were applied to assess the temporal and spatial variations and to identify pollution sources in some rivers/streams of Niger State in Nigeria. Sixteen medium-sized towns were sampled, and data were gathered on four physical, eleven chemical and two microbial water parameters. Hierarchical CA grouped the sixteen sampled sites into four main seasonal clusters and three main groups of similar water quality. Stepwise selection for the temporal Discriminant Analysis (DA) identified the most significant parameters for discriminating between the four seasons as magnesium, Escherichia coli, total coliform, total dissolved solids (TDS) and total hardness, with 83.3% apparent correct classification. Stepwise selection for the spatial Discriminant Analysis (DA) showed that Escherichia coli and magnesium are more prevalent in winter, Escherichia coli and total dissolved solids (TDS) are higher in spring, and Escherichia coli and total coliform are higher in summer and autumn, with a 94% total success rate of classification. The outcome of this study also shows that the water sources in groups one and two were more polluted than those in group three, and more polluted during summer and autumn than in winter and spring. Based on these findings, it is recommended that monitoring could be reduced to only the sites in groups one and two, and that the sampling seasons could be limited to summer and autumn.
Application of Multivariate Methods for Assessment of Variations in Rivers/Streams Water Quality in Niger State, Nigeria
doi:10.11648/j.ajtas.20130206.14
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Audu Isah
Usman Abdullahi
Muhammed Muhammed Ndamitso
Application of Multivariate Methods for Assessment of Variations in Rivers/Streams Water Quality in Niger State, Nigeria
2
6
183
183
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.14
© Science Publishing Group
Method of Maximum Likelihood Estimation of Optimal Number of Factors: An Information Criteria Approach
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.16
Method of Maximum Likelihood Estimation of Optimal Number of Factors: An Information Criteria Approach
doi:10.11648/j.ajtas.20130206.16
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Nwosu F. Dozie
Onyeagu, Sidney I.
Osuji, George A.
Ekezie Dan Dan
Method of Maximum Likelihood Estimation of Optimal Number of Factors: An Information Criteria Approach
2
6
201
201
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.16
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.16
© Science Publishing Group
Assessing the Knowledge, Attitude and Factors Affecting Team Building Amongst Health Workers in Nigeria Using the Permutation Method for Hotelling T-squared Analysis
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.15
This study assesses the knowledge, attitude and factors affecting team-building activities amongst health workers in Nigeria. The objective is to determine the equality of knowledge and attitude regarding team-building activities between health workers of Anambra State and Delta State. The source of data was a questionnaire administered randomly to a sample of 200 workers in Anambra State and a sample of 305 workers in Delta State. The test statistic used was the permutation method for Hotelling T-squared. The result of the analysis showed that there exists a significant difference in knowledge, attitude and factors affecting team building between Anambra State and Delta State, with a test statistic value of 8073.7 and a p-value of 0.00 for 10,000 permutations. The obtained significance value of 0.00 falls in the rejection region of the hypothesis at a significance level of 5% (α = 0.05), implying a significant difference in the knowledge, attitude and factors affecting team building in the two States. It was suggested that the health management of Anambra State encourage the practice of team building, either by sponsoring staff training on team building or by organizing seminars that will enhance its practice, since, as argued in the present study, the benefit of team building in any organization is that it achieves cohesiveness, improves team attitude and effectiveness, and enhances productivity. Studies assessing the knowledge, attitude and factors affecting team building in other professional sectors in Nigeria, such as education, finance and the environment, are seen as an area for future research.
Assessing the Knowledge, Attitude and Factors Affecting Team Building Amongst Health Workers in Nigeria Using the Permutation Method for Hotelling T-squared Analysis
doi:10.11648/j.ajtas.20130206.15
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Aronu, C. O.
Bilesanmi, A. O.
Assessing the Knowledge, Attitude and Factors Affecting Team Building Amongst Health Workers in Nigeria Using the Permutation Method for Hotelling T-squared Analysis
2
6
190
190
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.15
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.15
© Science Publishing Group
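The permutation Hotelling T-squared test used in the study can be sketched as follows: compute the two-sample T-squared statistic, then re-compute it under random relabelings of the pooled observations to obtain a p-value. The two simulated samples below are stand-ins for the questionnaire scores, which are not reproduced here:

```python
import numpy as np

def hotelling_t2(x, y):
    """Two-sample Hotelling T-squared statistic."""
    nx, ny = len(x), len(y)
    dbar = x.mean(axis=0) - y.mean(axis=0)
    # pooled covariance matrix
    s = ((nx - 1) * np.cov(x, rowvar=False)
         + (ny - 1) * np.cov(y, rowvar=False)) / (nx + ny - 2)
    return float(nx * ny / (nx + ny) * dbar @ np.linalg.solve(s, dbar))

def permutation_p_value(x, y, n_perm=2000, seed=0):
    """Proportion of permuted statistics at least as large as the observed one."""
    rng = np.random.default_rng(seed)
    observed = hotelling_t2(x, y)
    pooled = np.vstack([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        if hotelling_t2(pooled[perm[:len(x)]], pooled[perm[len(x):]]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=(40, 3))   # hypothetical group 1 scores
b = rng.normal(1.0, 1.0, size=(60, 3))   # hypothetical group 2, shifted mean
print(permutation_p_value(a, b))
```

Unlike the classical F-based Hotelling test, the permutation version does not require multivariate normality, which is why it suits questionnaire data.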
Intrinsically Ties Adjusted Tau (C-Tat) Correlation Coefficient
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.26
This paper proposes a method for correcting and adjusting the usual or regular estimates of Tau correlation coefficients for the possibility of ties within and between observations in the populations being correlated. The index, here called C-Tat for ‘ties-adjusted Tau correlation coefficient’, is formulated to intrinsically and structurally adjust and correct the estimated Tau correlation coefficient for the possible presence of tied observations in the sampled populations, and for the fact that the estimates obtained often vary depending on which of the two populations under study has its assigned ranks arranged in their natural order and which has its assigned ranks tagged along. The proposed method is illustrated with some sample data and shown to yield more reliable and efficient estimates of Tau correlation coefficients than the usual method, which is able to give the same estimates only if there are no tied observations whatsoever in the sampled populations.
Intrinsically Ties Adjusted Tau (C-Tat) Correlation Coefficient
doi:10.11648/j.ajtas.20130206.26
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
OYEKA CYPRIL ANENE
OSUJI GEORGE AMAEZE
NWANKWO CHRISTIAN CHUKWUEMEKA
Intrinsically Ties Adjusted Tau (C-Tat) Correlation Coefficient
2
6
281
281
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.26
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.26
© Science Publishing Group
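The C-Tat adjustment itself is not reproduced here, but the effect of ties that motivates it can be seen by comparing the unadjusted Kendall tau (tau-a) with the standard ties-corrected tau-b on the same ranked data; the ranked data below are hypothetical:

```python
import math

def kendall_tau_a_and_b(x, y):
    """Kendall's tau without (tau-a) and with (tau-b) the standard ties correction."""
    n = len(x)
    c_minus_d = tx = ty = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0:
                tx += 1          # pair tied on x
            if dy == 0:
                ty += 1          # pair tied on y
            c_minus_d += (dx * dy > 0) - (dx * dy < 0)
    n0 = n * (n - 1) / 2
    tau_a = c_minus_d / n0                                  # divides by all pairs
    tau_b = c_minus_d / math.sqrt((n0 - tx) * (n0 - ty))    # removes tied pairs
    return tau_a, tau_b

# Hypothetical ranked data with ties in both variables:
tau_a, tau_b = kendall_tau_a_and_b([1, 2, 2, 3, 4, 4, 5], [1, 1, 2, 3, 3, 4, 5])
print(round(tau_a, 4), round(tau_b, 4))
```

Here tau-b exceeds tau-a because ties shrink its denominator; the uncorrected estimate is attenuated whenever tied observations are present, which is the problem the proposed index addresses.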
Derivation of Inflection Points of Nonlinear Regression Curves - Implications to Statistics
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.25
In this paper, we derive inflection points for the commonly known growth curves, namely the generalized logistic, Richards, Von Bertalanffy, Brody, logistic, Gompertz, generalized Weibull, Weibull, Monomolecular and Mitscherlich functions. These functions often represent the mean part of non-linear regression models in statistics. The inflection point of a growth curve is the point on the curve at which the rate of growth attains its maximum value, and it carries an important physical interpretation in the respective application area. Not only the model parameters but also the inflection point of a growth curve is of high statistical interest.
Derivation of Inflection Points of Nonlinear Regression Curves - Implications to Statistics
doi:10.11648/j.ajtas.20130206.25
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Ayele Taye Goshu
Purnachandra Rao Koya
Derivation of Inflection Points of Nonlinear Regression Curves - Implications to Statistics
2
6
272
272
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.25
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.25
© Science Publishing Group
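As a numerical check on the kind of result derived in the paper, the logistic curve A/(1+e^(-k(t-t0))) has its inflection at t = t0 with value A/2, and the Gompertz curve A·e^(-e^(-k(t-t0))) has its inflection at t = t0 with value A/e. The parameter values below are arbitrary:

```python
import numpy as np

A, k, t0 = 10.0, 1.5, 4.0
t = np.linspace(0.0, 8.0, 100001)

curves = {
    "logistic": A / (1.0 + np.exp(-k * (t - t0))),
    "gompertz": A * np.exp(-np.exp(-k * (t - t0))),
}

inflections = {}
for name, y in curves.items():
    growth = np.gradient(y, t)            # numerical dy/dt
    i = int(np.argmax(growth))            # growth rate is maximal at the inflection
    inflections[name] = (t[i], y[i])

print(inflections)
```

Locating the maximum of the numerical growth rate recovers (t0, A/2) for the logistic and (t0, A/e) for the Gompertz, matching the analytical derivations.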
Graphical and Queuing Model of Banking Operations in Intercontinental Bank Plc, Nigeria
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.27
This study is aimed at modeling the influence and network relationships existing amongst members of staff of commercial banks. It also addresses the problem of profit maximization through staff reduction in a commercial banking system. In this paper, graph and queuing theories were applied to achieve the smooth running of operations in a commercial bank, using Intercontinental Bank Plc as a case study.
Graphical and Queuing Model of Banking Operations in Intercontinental Bank Plc, Nigeria
doi:10.11648/j.ajtas.20130206.27
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Hycinth Chukwudi Iwu
Chukwudi Justin Ogbonna
Opara Jude
Graphical and Queuing Model of Banking Operations in Intercontinental Bank Plc, Nigeria
2
6
292
292
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.27
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.27
© Science Publishing Group
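The paper's specific queuing model is not reproduced in the abstract, but the simplest single-server building block used in such banking studies, the M/M/1 queue, illustrates the trade-off between staffing (service rate) and customer waiting. The arrival and service rates below are hypothetical:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 metrics: utilization, mean number in system, mean times."""
    lam, mu = arrival_rate, service_rate
    if lam >= mu:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    rho = lam / mu                    # server (teller) utilization
    L = rho / (1.0 - rho)             # mean number of customers in the system
    W = 1.0 / (mu - lam)              # mean time a customer spends in the system
    Wq = rho / (mu - lam)             # mean waiting time in the queue
    return {"rho": rho, "L": L, "W": W, "Wq": Wq}

# Hypothetical teller: 20 customers/hour arriving, 25 served/hour.
print(mm1_metrics(20, 25))
```

Cutting staff raises utilization rho toward 1, and both L and W grow without bound as rho approaches 1, which is the core tension between profit maximization through staff reduction and smooth operations.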
Comparative Study on Forecasting Accuracy among Moving Average Models with Simulation and PALTEL Stock Market Data in Palestine
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.17
In this paper, we discuss three analytical time series models with a view to selecting the most effective and accurate forecasting model among them. We analytically modify the stochastic realization utilizing (i) k-th moving average, (ii) k-th weighted moving average, and (iii) k-th exponentially weighted moving average processes. The methods were applied to 1000 independent datasets for five different parameters with possible orders. We consider stationary data, and non-stationary data with first and second differences, for ARIMA models, over both short-term and long-term observation horizons. Similar forecasting models were developed and evaluated for the daily closing stock price of the PALTEL company in Palestine. The main finding is that, in most simulated datasets, one or more of the proposed models give better forecast accuracy than the classical ARIMA model. Specifically, in most simulated datasets the 3-time Exponentially Weighted Moving Average based on the Autoregressive Integrated Moving Average (EWMA3-ARIMA) is the best forecasting model among all models considered. For the PALTEL stock price, the best forecasting model is the 3-time Moving Average based on the Autoregressive Integrated Moving Average (MA3-ARIMA).
Comparative Study on Forecasting Accuracy among Moving Average Models with Simulation and PALTEL Stock Market Data in Palestine
doi:10.11648/j.ajtas.20130206.17
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Samir K. Safi
Issam A. Dawoud
Comparative Study on Forecasting Accuracy among Moving Average Models with Simulation and PALTEL Stock Market Data in Palestine
2
6
209
209
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.17
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.17
© Science Publishing Group
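A minimal sketch of the kind of comparison described above: one-step forecasts from an equal-weight 3-term moving average (MA3) versus a 3-term exponentially weighted moving average (EWMA3), scored by RMSE on a simulated AR(1) series. The smoothing weight and the simulated series are assumptions; the paper's ARIMA-based variants are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a stationary AR(1) series as a stand-in for the study's data.
n, phi = 500, 0.7
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal()

def ma_forecast(y, k=3):
    """One-step-ahead forecast: equal-weight mean of the last k observations."""
    return np.array([y[t - k:t].mean() for t in range(k, len(y))])

def ewma_forecast(y, alpha=0.5, k=3):
    """One-step-ahead forecast: exponentially decaying weights over the last k observations."""
    w = (1.0 - alpha) ** np.arange(k)     # weight for the most recent lag down to lag k
    w = w / w.sum()
    return np.array([w @ y[t - k:t][::-1] for t in range(k, len(y))])

def rmse(forecast):
    return float(np.sqrt(np.mean((y[3:] - forecast) ** 2)))

print({"MA3": rmse(ma_forecast(y)), "EWMA3": rmse(ewma_forecast(y))})
```

For a positively autocorrelated series, weighting recent observations more heavily tracks the level better, so EWMA3 achieves a lower RMSE than MA3 here, consistent with the abstract's simulation finding.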
Estimating the Fisher’s Scoring Matrix Formula from Logistic Model
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.19
This paper proposes a matrix approach to estimating the parameters of logistic regression, with a view to estimating the effects of risk factors for gestational diabetes mellitus (GDM). The proposed method of maximum likelihood estimation (MLE), unlike other methods of estimating parameters of non-linear regression, is simpler, and convergence of the parameters is quicker. The odds ratios obtained from the logistic regression were used to interpret the effects of these risk factors on GDM, where obesity and F.H, as risk factors, were positively associated with GDM. The proposed method was seen to compare favorably with other known methods.
Estimating the Fisher’s Scoring Matrix Formula from Logistic Model
doi:10.11648/j.ajtas.20130206.19
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Okeh UM
Oyeka I. C. A.
Estimating the Fisher’s Scoring Matrix Formula from Logistic Model
2
6
227
227
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.19
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.19
© Science Publishing Group
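The Fisher scoring iteration for logistic regression that such a matrix approach builds on is standard: beta is updated by (X'WX)^(-1) X'(y - p) with W = diag(p(1 - p)). A minimal sketch on simulated data (the GDM data and the paper's exact formulation are not reproduced):

```python
import numpy as np

def fisher_scoring_logistic(X, y, n_iter=25):
    """Fit logistic regression by Fisher scoring (identical to Newton-Raphson here)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # fitted probabilities
        W = p * (1.0 - p)                          # diagonal of the weight matrix
        # scoring update: beta += (X' W X)^{-1} X'(y - p)
        beta = beta + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return beta

# Simulated data with one covariate (intercept plus slope); true_beta is an assumption.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([-0.5, 1.2])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

beta_hat = fisher_scoring_logistic(X, y)
print(np.round(beta_hat, 2), np.round(np.exp(beta_hat[1]), 2))  # coefficients, odds ratio
```

Exponentiating a fitted coefficient gives the odds ratio used to interpret a risk factor's effect, which is how the abstract reads the association of obesity with GDM.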
Multivariate Analysis of Data on the Effects of Different Vegetative Covers on Some Physical Properties of a Selected Nigerian Soil
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.18
This work discusses the use of multivariate analysis of variance to investigate the effects of different vegetative covers on the physical properties of a selected Nigerian soil. An additive effect model was assumed and, using data obtained from the Department of Soil Science of the University of Nigeria, Nsukka, we tested for the equality of treatment effects. The analysis revealed that the treatment effects were significant at the 5 percent level. Based on our analysis, we recommend that the vegetative covers in question are useful and necessary and should therefore be used to improve soil physical conditions for any overused land in the Nsukka area of Nigeria, and for similar soils elsewhere, depending on the use and type of farming system to which the land is put.
Multivariate Analysis of Data on the Effects of Different Vegetative Covers on Some Physical Properties of a Selected Nigerian Soil
doi:10.11648/j.ajtas.20130206.18
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
CHUKWUDI JUSTIN OGBONNA
OPARA JUDE
HYCINTH CHUKWUDI IWU
Multivariate Analysis of Data on the Effects of Different Vegetative Covers on Some Physical Properties of a Selected Nigerian Soil
2
6
220
220
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.18
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.18
© Science Publishing Group
A Comparative Study between Ridit and Modified Ridit Analysis
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.23
This paper compares ridit analysis with modified ridit analysis, and the comparison is illustrated with an example. It was observed, from the example at least, that when the sample sizes of the two samples being compared are too disparate, a more reliable conclusion using Bross ridit analysis is likely to be reached only when the group with the larger sample size is used as the reference group; otherwise Bross ridit analysis would lead to conflicting conclusions depending on which group is used as the reference group. Modified ridit analysis treats the groups being studied as samples drawn from some larger populations, in which the variances or standard deviations, as well as the results obtained, are the same no matter which sample is used as the reference group. The modified procedure is therefore preferable to ridit analysis, especially in cases where the groups being compared are samples from some populations.
A Comparative Study between Ridit and Modified Ridit Analysis
doi:10.11648/j.ajtas.20130206.23
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Ebuh Godday Uwawunkonye
Oyeka Ikewelugo Cyprian Anaene
A Comparative Study between Ridit and Modified Ridit Analysis
2
6
254
254
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.23
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.23
© Science Publishing Group
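The mechanics of Bross's ridits, which both procedures build on, are simple to sketch: the ridit of an ordered category is the proportion of the reference group below it plus half the proportion within it, and a comparison group's mean ridit estimates the probability that a random member of it exceeds a random member of the reference group, with ties counted half. The frequencies below are hypothetical:

```python
import numpy as np

def ridits(ref_counts):
    """Bross ridits for the reference group's ordered categories."""
    props = np.asarray(ref_counts, dtype=float)
    props = props / props.sum()
    below = np.concatenate([[0.0], np.cumsum(props)[:-1]])
    return below + 0.5 * props          # proportion below + half proportion within

def mean_ridit(ref_counts, cmp_counts):
    """Mean ridit of the comparison group relative to the reference group."""
    cmp_props = np.asarray(cmp_counts, dtype=float)
    cmp_props = cmp_props / cmp_props.sum()
    return float(ridits(ref_counts) @ cmp_props)

# Hypothetical severity frequencies (none, mild, moderate, severe):
reference = [40, 30, 20, 10]
comparison = [10, 20, 30, 40]
print(mean_ridit(reference, comparison))   # exceeds 0.5: comparison shifted to severe
```

A group's mean ridit against itself is always 0.5, which is the baseline the abstract's reference-group sensitivity is measured against.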
Statistical Analysis of Life-Time and Temperature of Black Holes
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.20
In the present article, the lifetimes and temperatures of black holes existing in X-ray binaries and active galactic nuclei are analyzed statistically with the help of the formulas Γ = (M/M☉)³ × 10⁶⁶ years and T = hc³/8πkGM kelvin respectively, where M stands for the mass of the black hole and the other parameters have their usual meanings.
Statistical Analysis of Life-Time and Temperature of Black Holes
doi:10.11648/j.ajtas.20130206.20
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Ved Prakash
Dipo Mahto
Ashok Kumar
Basant Kumar Das
Statistical Analysis of Life-Time and Temperature of Black Holes
2
6
232
232
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.20
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.20
© Science Publishing Group
Comparison of Statistical Methods for Outlier Detection in Proficiency Testing Data on Analysis of Lead in Aqueous Solution
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.21
Proficiency testing is the regular testing of the performance of individual laboratories by an external agency. Stable and effectively homogeneous elemental solutions of different concentrations were prepared at NPL and certified by a round-robin test. These certified reference materials (CRMs), along with reports containing all relevant information in which the laboratories were identified only by a reference number, were distributed to sixty-seven participant laboratories. In this paper only the data for the lead (Pb) elemental solutions of 1-5 mg/l are presented for outlier detection. The results received from sixty-four laboratories for the Pb elemental solutions were statistically evaluated with different approaches, viz. Cochran's test, Grubbs' test, Hampel's test, the classical z-score, the median and NIQR method, robust statistical analysis (Algorithm A of ISO 13528) and the NATA method. The robust estimates of the average and uncertainty derived from the ISO 13528 method are very close to the reference values for the 1 and 2 mg/l Pb elemental solutions. The performance of the laboratories was expressed by z-score: laboratories with |z| ≤ 2 are classified as satisfactory, 2 < |z| < 3 as questionable and |z| ≥ 3 as unsatisfactory. Among all the methods, the highest number of outliers, about 30%, was obtained by the NATA statistical analysis. As the NATA method considers both within- and between-laboratory variance, it appears to be the most suitable method for outlier detection for the data set evaluated in this study.
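The z-score performance bands described above can be sketched directly; the assigned value and standard deviation for proficiency assessment below are hypothetical, and boundary conventions vary slightly between schemes:

```python
def z_score(x, assigned_value, sigma_pt):
    """Proficiency-testing z-score: lab result x against the assigned value,
    scaled by the standard deviation for proficiency assessment."""
    return (x - assigned_value) / sigma_pt

def classify(z):
    """Conventional ISO 13528-style performance bands."""
    z = abs(z)
    if z <= 2:
        return "satisfactory"
    if z < 3:
        return "questionable"
    return "unsatisfactory"

# Hypothetical Pb results (mg/l) for an assigned value of 1.00, sigma_pt = 0.05
for x in (1.02, 1.12, 1.30):
    print(x, classify(z_score(x, 1.00, 0.05)))
```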
Comparison of Statistical Methods for Outlier Detection in Proficiency Testing Data on Analysis of Lead in Aqueous Solution
doi:10.11648/j.ajtas.20130206.21
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Sushree Swarupa Tripathy
Rajiv Kumar Saxena
Prabhat Kumar Gupta
Comparison of Statistical Methods for Outlier Detection in Proficiency Testing Data on Analysis of Lead in Aqueous Solution
2
6
242
242
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.21
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.21
© Science Publishing Group
Phenol Removal via Advanced Oxidative Processes (O3/Photo-Fenton) and Chemometrics
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.22
Taguchi’s L16 orthogonal array proved efficient in the study of the removal of total phenols using advanced oxidation processes. In the statistical assessment, the input variables found significant through ANOVA were hydrogen peroxide (F = 10.0924, p-value = 5.02%) and ozone (F = 3.8686, p-value = 14.39%). Under the best experimental condition the removal of total phenols reaches 100%; the corresponding factor settings are hydrogen peroxide = 38.7 g and ozone flow = 3 L/h.
Phenol Removal via Advanced Oxidative Processes (O3/Photo-Fenton) and Chemometrics
doi:10.11648/j.ajtas.20130206.22
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Ana Paula Barbosa Rodrigues de Freitas
Leandro Valim de Freitas
Hélcio José Izário Filho
Messias Borges Silva
Phenol Removal via Advanced Oxidative Processes (O3/Photo-Fenton) and Chemometrics
2
6
247
247
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.22
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.22
© Science Publishing Group
Investigating Predictors of Examination Result Data Using Logistic Regression (A Case Study of Imo State Polytechnic, Umuagwo, Imo State, Nigeria)
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.24
This study analyzes the examination results (scores) of 300 randomly selected students of Imo State Polytechnic, Umuagwo, near Owerri, Imo State, Nigeria, who take English Language and Mathematics as general courses, using a binary logistic regression model, with the aim of examining how certain secondary-school factors (variables) contribute to students' performance at the Polytechnic. The analysis uses the explanatory variables gender, type of secondary school, category of secondary school, examination board and location of secondary school, with the students' scores in English Language and Mathematics as the response variables. Correspondence analysis revealed a significant correlation between examination board and school location, so the analysis was conducted in two stages. In the first stage, English Language and Mathematics scores are the response variables, with gender, type of secondary school, category of secondary school and examination board as the explanatory variables. In the second stage, English Language and Mathematics scores remain the response variables, while gender, type of secondary school, category of secondary school and school location are the explanatory variables. An odds-ratio analysis compares the scores obtained in the two examinations, English Language and Mathematics. The results show that, in both stages, females consistently perform better in Mathematics than in English. The study also shows that students from girls' secondary schools perform better in the English Language examination than students from boys' secondary schools. Furthermore, government schools consistently perform better in the English examination than in Mathematics.
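The odds-ratio comparison mentioned above reduces to a 2x2-table calculation, and its logarithm is what a logistic regression coefficient for a binary group dummy estimates; the pass/fail counts below are hypothetical, not the study's data:

```python
import math

def odds_ratio(pass_a, fail_a, pass_b, fail_b):
    """Odds ratio of passing for group A versus group B from a 2x2 table."""
    return (pass_a / fail_a) / (pass_b / fail_b)

# Hypothetical counts: 70 of 100 females pass Mathematics vs 60 of 100 males
or_fm = odds_ratio(70, 30, 60, 40)
# The log odds ratio equals the coefficient a simple binary logistic
# regression would assign to the group dummy.
print(or_fm, math.log(or_fm))
```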
Investigating Predictors of Examination Result Data Using Logistic Regression (A Case Study of Imo State Polytechnic, Umuagwo, Imo State, Nigeria)
doi:10.11648/j.ajtas.20130206.24
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Opara Jude
Esemokumo Perewarebo Akpos
Iheagwara Andrew Ihuoma
Okenwe Idochi
Osuji George A.
Investigating Predictors of Examination Result Data Using Logistic Regression (A Case Study of Imo State Polytechnic, Umuagwo, Imo State, Nigeria)
2
6
267
267
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.24
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.24
© Science Publishing Group
A New Criterion for Lag-Length Selection in Unit Root Tests
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.28
This paper examines the lag-selection problem in unit root tests, which has become a major specification problem in the empirical analysis of non-stationary time series data. The implementation of unit root tests requires the choice of an optimal truncation lag for good power properties, yet it is unrealistic to assume that the true optimal truncation lag is known a priori to practitioners and other applied researchers. Consequently, these users rely largely on standard information criteria for selecting the truncation lag in unit root tests. A number of previous studies have shown that these criteria tend to over-specify the truncation lag-length, leading to the well-known low-power problem commonly associated with most unit root tests in the literature. This paper focuses on the over-specification of the truncation lag-length within the context of the augmented Dickey-Fuller (ADF) and generalized least squares Dickey-Fuller (DF-GLS) unit root tests. To address this lag-selection problem, we propose a new criterion for the selection of the truncation lag in unit root tests based on the Koyck distributed lag model, and we show that this new criterion avoids the over-specification of the truncation lag-length commonly associated with standard information criteria.
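A minimal sketch of the information-criterion lag selection that the paper's Koyck-based criterion is meant to replace: AIC over autoregressions of increasing order, fitted by least squares on a common sample (pure NumPy; the data are simulated, not from the paper):

```python
import numpy as np

def aic_lag_selection(y, max_lag):
    """Pick the AR lag length minimizing AIC, a stand-in for the standard
    information-criterion step in ADF-type tests."""
    y = np.asarray(y, dtype=float)
    best = None
    for p in range(1, max_lag + 1):
        # Regress y_t on y_{t-1..t-p}, aligned on a common sample so the
        # AIC values are comparable across p.
        X = np.column_stack([y[max_lag - k: len(y) - k] for k in range(1, p + 1)])
        X = np.column_stack([np.ones(len(X)), X])
        yt = y[max_lag:]
        beta, *_ = np.linalg.lstsq(X, yt, rcond=None)
        resid = yt - X @ beta
        n = len(yt)
        aic = n * np.log(resid @ resid / n) + 2 * (p + 1)
        if best is None or aic < best[1]:
            best = (p, aic)
    return best[0]

rng = np.random.default_rng(0)
e = rng.standard_normal(500)
y = np.zeros(500)
for t in range(2, 500):            # simulate a true AR(2) process
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + e[t]
print(aic_lag_selection(y, 8))
```

AIC's tendency to pick lags above the true order on some draws is exactly the over-specification problem the abstract describes.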
A New Criterion for Lag-Length Selection in Unit Root Tests
doi:10.11648/j.ajtas.20130206.28
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Agunloye, Oluokun Kasali
Arnab, Raghunath
Shangodoyin, Dahud Kehinde
A New Criterion for Lag-Length Selection in Unit Root Tests
2
6
298
298
2014-01-01
2014-01-01
10.11648/j.ajtas.20130206.28
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20130206.28
© Science Publishing Group
Using GLS to Generate Forecasts in Regression Models with Auto-correlated Disturbances with simulation and Palestinian Market Index Data
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140301.12
This paper addresses an important statistical problem concerning forecasting in regression models for time series processes. The best-known method of estimation and forecasting is Ordinary Least Squares (OLS), but OLS may not be optimal in this context, so over the years many specialized estimation techniques have been developed, for example Generalized Least Squares (GLS). We compare forecasts based on several estimators with predictions based on the GLS estimate, using standard measures of forecast accuracy. We conduct an extensive computer simulation with time series data to compare these methods. The same forecasting criteria were evaluated on a real data set of daily closing prices of the Palestinian market index (Al-Quds Index), consisting of 164 monthly observations obtained from the website of the Palestine Stock Exchange. The main finding is that, for forecasting purposes, there is not much to be gained from trying to identify the exact order and form of the auto-correlated disturbances via the GLS estimation method. In addition, the forecasting accuracy of the GLS method does not differ substantially from that of other methods such as Maximum Likelihood Estimation (MLE), minimization of the Conditional Sum of Squares (CSS) and the combination of these two methods. Moreover, for parameter estimation, GLS is nearly as efficient as exact parameter estimation. By contrast, OLS performs much less efficiently than the other estimation methods and produces poor forecasting accuracy.
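A minimal GLS sketch for a regression with AR(1) disturbances and a known disturbance covariance (simulated data; the paper's own estimation setup and disturbance structure may differ):

```python
import numpy as np

def gls(X, y, Sigma):
    """GLS estimator: (X' Sigma^{-1} X)^{-1} X' Sigma^{-1} y."""
    Si = np.linalg.inv(Sigma)
    return np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)

rng = np.random.default_rng(1)
n, rho, beta_true = 200, 0.7, np.array([1.0, 2.0])
t = np.arange(n)
X = np.column_stack([np.ones(n), t / n])
# Simulate AR(1) disturbances u_t = rho u_{t-1} + e_t
u = np.zeros(n)
e = rng.standard_normal(n)
for i in range(1, n):
    u[i] = rho * u[i - 1] + e[i]
y = X @ beta_true + u
# AR(1) covariance matrix, Sigma_ij proportional to rho^|i-j|
Sigma = rho ** np.abs(np.subtract.outer(t, t))
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_gls = gls(X, y, Sigma)
print(beta_ols, beta_gls)
```

With Sigma equal to the identity matrix, the GLS formula collapses to OLS, which is a convenient correctness check.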
Using GLS to Generate Forecasts in Regression Models with Auto-correlated Disturbances with simulation and Palestinian Market Index Data
doi:10.11648/j.ajtas.20140301.12
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Samir K. Safi
Ehab A. Abu Saif
Using GLS to Generate Forecasts in Regression Models with Auto-correlated Disturbances with simulation and Palestinian Market Index Data
3
1
17
17
2014-01-01
2014-01-01
10.11648/j.ajtas.20140301.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140301.12
© Science Publishing Group
Bayesian Model Averaging: An Application to the Determinants of Airport Departure Delay in Uganda
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140301.11
Bayesian model averaging was employed to study the dynamics of aircraft departure delay, based on airport operational data on aviation and meteorological parameters collected daily for the period 2004 through 2008 in a matrix X. Models were evaluated in the R programming language, mainly to establish which combinations of variables formulate the best model by assessing their importance. Findings showed that, of the sixteen covariates, 62.5% were suitable for inclusion in a model of aircraft departure delay, and 40% of these exhibited negative coefficients. The following parameters were found to negatively affect departure delay: the number of aircraft that departed on time (-0.562), the number of persons on board the arriving aircraft (-0.002), daily average visibility (-0.001) and year (-1.605). A comparison between exact posterior model probabilities (PMP Exact) and those based on Markov chain Monte Carlo (PMP MCMC) revealed a high correlation (0.998; p < 0.01). The study recommends MCMC as the more efficient approach to modelling the determinants of aircraft departure delay at an airport.
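One common shortcut for the posterior model probabilities that Bayesian model averaging combines is BIC-based weighting with equal prior model probabilities; the sketch below uses simulated data and is not the paper's exact computation:

```python
import numpy as np

def bic_model_weights(y, candidate_X):
    """Approximate posterior model probabilities by normalized exp(-BIC/2)
    weights, assuming equal prior model probabilities."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    bics = []
    for X in candidate_X:
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        bics.append(n * np.log(resid @ resid / n) + X.shape[1] * np.log(n))
    b = np.asarray(bics)
    w = np.exp(-(b - b.min()) / 2)   # shift by min BIC for numerical stability
    return w / w.sum()

rng = np.random.default_rng(2)
n = 300
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
y = 1.0 + 0.8 * x1 + rng.standard_normal(n)   # x2 is irrelevant by construction
ones = np.ones(n)
models = [np.column_stack([ones, x1]),
          np.column_stack([ones, x2]),
          np.column_stack([ones, x1, x2])]
w = bic_model_weights(y, models)
print(w)   # weights should concentrate on models that include x1
```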
Bayesian Model Averaging: An Application to the Determinants of Airport Departure Delay in Uganda
doi:10.11648/j.ajtas.20140301.11
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Wesonga Ronald
Nabugoomu Fabian
Bayesian Model Averaging: An Application to the Determinants of Airport Departure Delay in Uganda
3
1
5
5
2014-01-01
2014-01-01
10.11648/j.ajtas.20140301.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140301.11
© Science Publishing Group
Bayesian Estimation of Reliability Function for A Changing Exponential Family Model under Different Loss Functions
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140301.14
The paper deals with estimating a shift point occurring in a sequence of independent observations x1, x2, …, xm, xm+1, …, xn from Poisson and geometric distributions. The shift occurs in the sequence after xm, i.e. once m life data have been observed. With the shift point m known, Bayes estimators of the before- and after-shift process means θ1 and θ2 are derived under symmetric and asymmetric loss functions. A sensitivity analysis of the Bayes estimators is carried out by simulation and numerical comparison in R. The results show the effectiveness of the shift in sequences from both the Poisson and the geometric distributions.
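For the symmetric (squared-error) case, the Bayes estimator of a Poisson mean under a conjugate Gamma prior is simply the posterior mean; a minimal sketch with hypothetical data (the paper's own priors and loss functions may differ):

```python
# With a Gamma(a, b) prior (shape a, rate b) on a Poisson mean and
# observations x_1..x_n, the posterior is Gamma(a + sum(x), b + n);
# under squared-error loss the Bayes estimator is the posterior mean.
def bayes_poisson_mean(xs, a=1.0, b=1.0):
    return (a + sum(xs)) / (b + len(xs))

print(bayes_poisson_mean([2, 3, 1, 4], a=2.0, b=1.0))   # (2 + 10) / (1 + 4) = 2.4
```

Under asymmetric losses such as LINEX, the Bayes estimator is a different functional of the same posterior rather than its mean.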
Bayesian Estimation of Reliability Function for A Changing Exponential Family Model under Different Loss Functions
doi:10.11648/j.ajtas.20140301.14
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
P. Nasiri
N. Jafari
A. Jafari
Bayesian Estimation of Reliability Function for A Changing Exponential Family Model under Different Loss Functions
3
1
30
30
2014-01-01
2014-01-01
10.11648/j.ajtas.20140301.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140301.14
© Science Publishing Group
The G-Method for Rank Determination in Rank Order Statistics for ‘C’ Samples
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140301.13
This paper proposes a statistical method, called ‘the G method’ to highlight its generalized nature, for determining and assigning ranks to sample observations drawn from several populations, for possible use in further analyses. The sampled populations may be measurements on as low as the ordinal scale and need not be continuous or even numeric. The proposed rank-determination model intrinsically and structurally provides for breaking possible ties between sample observations, automatically assigning tied observations their mean ranks; this obviates the need for the sampled populations to be continuous. The proposed method is more general and more widely applicable than an existing formulation that can be used only with continuous populations, and it is easier to use in practice than the usual traditional method, which is often tedious and cumbersome, especially with large samples. The proposed method is illustrated with data and shown to yield the same results as other existing methods where those methods are equally applicable.
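The mean-rank tie-breaking the method builds in can be sketched directly; `mean_ranks` is an illustrative name, not the paper's notation:

```python
import numpy as np

def mean_ranks(values):
    """Assign ranks to pooled observations, giving tied values their mean rank."""
    v = np.asarray(values)
    order = np.argsort(v, kind="stable")
    ranks = np.empty(len(v), dtype=float)
    sorted_v = v[order]
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and sorted_v[j + 1] == sorted_v[i]:
            j += 1                                   # extend the tie group
        ranks[order[i: j + 1]] = (i + j + 2) / 2.0   # mean of ranks i+1..j+1
        i = j + 1
    return ranks

print(mean_ranks([3, 1, 4, 1, 5]))   # the two 1's share rank 1.5
```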
The G-Method for Rank Determination in Rank Order Statistics for ‘C’ Samples
doi:10.11648/j.ajtas.20140301.13
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Oyeka Ikewelugo Cyprian Anaene
Nwankwo Chike H.
Awopeju K. Abidemi
The G-Method for Rank Determination in Rank Order Statistics for ‘C’ Samples
3
1
24
24
2014-01-01
2014-01-01
10.11648/j.ajtas.20140301.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140301.13
© Science Publishing Group
Direct and Indirect Effects in Dummy Variable Regression
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140302.13
This paper proposes and develops the use of non-cumulative dummy variables of 1’s and 0’s to represent the levels of parent independent variables in dummy-variable multiple regression models. The regression coefficients obtained with the proposed method are easier to interpret and understand than those based on cumulatively coded ordinal dummy variables of 1’s and 0’s, which could be used for the same purpose. The proposed method also enables the simultaneous estimation of the total (absolute or overall) effect of a parent independent variable, its direct effect through its representative dummies, and its indirect effect on a given dependent variable through the mediation of the other parent independent variables in the model. The use of these procedures is illustrated with an example.
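Non-cumulative (indicator) dummy coding and the interpretation of its coefficients can be illustrated with a small simulated example; the variable names and data are hypothetical:

```python
import numpy as np

# Indicator coding for a 3-level factor: each level beyond the reference
# level gets its own 0/1 dummy, so a coefficient is that level's effect
# relative to the reference level.
levels = np.array([1, 2, 3, 2, 3, 1, 3])
d2 = (levels == 2).astype(float)   # dummy for level 2
d3 = (levels == 3).astype(float)   # dummy for level 3
X = np.column_stack([np.ones(len(levels)), d2, d3])
y = np.array([1.0, 2.1, 3.0, 1.9, 3.2, 0.9, 2.8])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# Intercept = mean of level 1; each dummy coefficient = that level's
# mean minus the level-1 mean.
print(beta)
```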
Direct and Indirect Effects in Dummy Variable Regression
doi:10.11648/j.ajtas.20140302.13
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Oyeka I. C. A.
Nwankwo Chike H.
Direct and Indirect Effects in Dummy Variable Regression
3
2
48
48
2014-01-01
2014-01-01
10.11648/j.ajtas.20140302.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140302.13
© Science Publishing Group
Markov Chain Model and Its Application to Annual Rainfall Distribution for Crop Production
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140302.12
A stochastic process with first-order dependence in discrete state and time is described as a Markov chain. This principle was used to formulate a four-state model of the annual rainfall distribution in Minna with respect to crop production. The model is designed so that, given any of the four states in a given year, the probability of a transition to any of the other three states in the following year(s), and in the long run, can be determined quantitatively. The model was applied to annual rainfall data for Minna. The results show that in the long run 14% of annual rainfall will be low rainfall, 34% will be moderate and well-spread rainfall, 47% will be high rainfall and 5% will be moderate rainfall that is not well spread. The model provides information relating rainfall to crop cultivation that farmers and the government could use to plan strategies for high crop production in Minna and its immediate environment.
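The long-run percentages quoted above are the stationary distribution of the chain, obtainable by solving pi P = pi together with sum(pi) = 1; the transition matrix below is illustrative, not the one estimated from the Minna data:

```python
import numpy as np

# Illustrative 4-state transition matrix (rows sum to 1); states could be
# low / moderate well-spread / high / moderate not well-spread rainfall.
P = np.array([[0.20, 0.40, 0.30, 0.10],
              [0.10, 0.40, 0.45, 0.05],
              [0.15, 0.30, 0.50, 0.05],
              [0.10, 0.30, 0.55, 0.05]])
n = P.shape[0]
# Stack (P^T - I) pi = 0 with the normalization sum(pi) = 1 and solve
# the overdetermined system by least squares.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # long-run probabilities of the four rainfall states
```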
Markov Chain Model and Its Application to Annual Rainfall Distribution for Crop Production
doi:10.11648/j.ajtas.20140302.12
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
Abubakar Usman Yusuf
Lawal Adamu
Muhammed Abdullahi
Markov Chain Model and Its Application to Annual Rainfall Distribution for Crop Production
3
2
43
43
2014-01-01
2014-01-01
10.11648/j.ajtas.20140302.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140302.12
© Science Publishing Group
Comparison of Parametric and Nonparametric Item Response Techniques in Determining Differential Item Functioning in Polytomous Scale
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140302.11
This study compares parametric and nonparametric methods based on Item Response Theory (IRT) for determining differential item functioning (DIF) in polytomous scales. DIF analysis based on parametric IRT was conducted using the parameter comparison method. For the nonparametric IRT analysis, DIF was determined by comparing the area indices of the item characteristic curves (ICCs) obtained for the reference and focal groups. The comparisons were conducted on data sets from the TIMSS 2011 eighth-grade student survey, using responses to the "Attitudes Toward Mathematics" scale from samples in Turkey and South Korea, to determine whether the items incorporated DIF with respect to country and sex. Parametric and nonparametric methods were observed to produce generally similar DIF results across countries. Nevertheless, DIF results for sex groups within countries differed between the parametric and nonparametric IRT techniques. The results showed that the items flagged for DIF depended on the technique used, which indicates the importance of choosing the best technique when testing whether scale items incorporate DIF.
Comparison of Parametric and Nonparametric Item Response Techniques in Determining Differential Item Functioning in Polytomous Scale
doi:10.11648/j.ajtas.20140302.11
American Journal of Theoretical and Applied Statistics
2014-01-01
© Science Publishing Group
T. Oguz Basokcu
Tuncay Ogretmen
Comparison of Parametric and Nonparametric Item Response Techniques in Determining Differential Item Functioning in Polytomous Scale
3
2
38
38
2014-01-01
2014-01-01
10.11648/j.ajtas.20140302.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140302.11
© Science Publishing Group
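The area-index idea mentioned in the abstract above measures how far apart the reference and focal groups' ICCs lie. A hedged sketch using a dichotomous 2PL curve for illustration (the study's polytomous ICCs and parameter values differ):

```python
import numpy as np

# 2PL item characteristic curve; a = discrimination, b = difficulty.
def icc(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-4, 4, 801)
ref = icc(theta, a=1.2, b=0.0)   # reference group
foc = icc(theta, a=1.2, b=0.5)   # focal group, shifted difficulty

# Unsigned area between the two curves via the trapezoid rule.
diff = np.abs(ref - foc)
area = np.sum((diff[1:] + diff[:-1]) * np.diff(theta)) / 2.0
```

For equal discriminations the exact area equals the difficulty shift |b_focal - b_reference|, so a large area index flags the item as a DIF candidate.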
Logit Models for Household Food Insecurity Classification
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140302.14
Micro-level measurement of food insecurity is a necessary step towards a more feasible solution to the global problem of properly classifying households by food insecurity status. Measuring food insecurity is challenging because it is a multi-faceted, latent, and continuous phenomenon explained by a wide range of both quantitative and qualitative variables. In this paper, we examined the quantitative variables and applied exploratory factor analysis to identify which of them significantly influence household food insecurity. Logit models were then developed using the variables identified, and empirical data obtained from rural households in the Tororo and Busia districts of Uganda were used to fit them. Four logit models based on four scenarios were developed and compared. The key finding is that, for households to be correctly analyzed and classified into the right food security category, a hybrid dependent variable representing as many aspects of food insecurity as possible should be used. The model correctly classified 90% of the combined households for the two districts. When fitted for each district separately, 99% of households in Busia and 96% in Tororo were found to be food insecure.
Logit Models for Household Food Insecurity Classification
doi:10.11648/j.ajtas.20140302.14
American Journal of Theoretical and Applied Statistics
2014-04-22
© Science Publishing Group
Abraham Yeyo Owino
Leonard Kiboijana Atuhaire
Ronald Wesonga
Fabian Nabugoomu
Elijah Muwanga-Zaake
Logit Models for Household Food Insecurity Classification
3
2
54
54
2014-04-22
2014-04-22
10.11648/j.ajtas.20140302.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140302.14
© Science Publishing Group
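A logit model of the kind described above can be fitted by Newton-Raphson (iteratively reweighted least squares). A minimal sketch on synthetic covariates standing in for the study's survey variables, which are not reproduced in this feed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for two household covariates; the true coefficients
# below are illustrative, not estimates from the paper.
n = 500
X = rng.normal(size=(n, 2))
true_beta = np.array([1.5, -2.0])
p = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
y = rng.binomial(1, p)          # 1 = food insecure, 0 = food secure

def fit_logit(X, y, n_iter=25):
    """Fit a logit model by Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = mu * (1.0 - mu)                 # Bernoulli variance weights
        grad = X.T @ (y - mu)
        hess = X.T @ (X * W[:, None])
        beta = beta + np.linalg.solve(hess, grad)
    return beta

beta_hat = fit_logit(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ beta_hat))) >= 0.5).astype(int)
accuracy = (pred == y).mean()   # in-sample classification rate
```

Classification rates like the 90% reported in the abstract are obtained by comparing these fitted labels with the observed food security status.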
On Some Classical Properties of Doubly Truncated Mixture of Burr XII and Weibull Distributions
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140302.15
Limited work has been conducted on double truncation for mixtures of different distributions. This paper is concerned with the doubly truncated mixture of the Burr XII and Weibull distributions, and derives its classical properties: the cumulative distribution function, hazard rate, failure rate, inverse hazard function, odd function, cumulative hazard function, rth moment about the origin, moment generating function, characteristic function, moments about the origin and the mean, mean and variance, and measures of skewness and kurtosis.
On Some Classical Properties of Doubly Truncated Mixture of Burr XII and Weibull Distributions
doi:10.11648/j.ajtas.20140302.15
American Journal of Theoretical and Applied Statistics
2014-04-30
© Science Publishing Group
Muhammad Daniyal
Muhammad Aleem
Tahir Nawaz
On Some Classical Properties of Doubly Truncated Mixture of Burr XII and Weibull Distributions
3
2
59
59
2014-04-30
2014-04-30
10.11648/j.ajtas.20140302.15
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140302.15
© Science Publishing Group
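The basic construction behind the paper above is renormalising a mixture density to a truncation interval [a, b]. A hedged sketch with illustrative parameter values (not those used in the paper):

```python
import math

# Burr XII and Weibull components with illustrative parameters.
def burr_xii_pdf(x, c=2.0, k=3.0):
    return c * k * x**(c - 1) * (1 + x**c) ** (-(k + 1))

def burr_xii_cdf(x, c=2.0, k=3.0):
    return 1 - (1 + x**c) ** (-k)

def weibull_pdf(x, lam=1.0, m=1.5):
    return (m / lam) * (x / lam) ** (m - 1) * math.exp(-((x / lam) ** m))

def weibull_cdf(x, lam=1.0, m=1.5):
    return 1 - math.exp(-((x / lam) ** m))

def mixture_pdf(x, p=0.4):
    return p * burr_xii_pdf(x) + (1 - p) * weibull_pdf(x)

def mixture_cdf(x, p=0.4):
    return p * burr_xii_cdf(x) + (1 - p) * weibull_cdf(x)

def truncated_pdf(x, a=0.5, b=2.0, p=0.4):
    """Mixture density renormalised to the truncation interval [a, b]:
    f(x) / (F(b) - F(a)) inside [a, b], zero outside."""
    if not (a <= x <= b):
        return 0.0
    return mixture_pdf(x, p) / (mixture_cdf(b, p) - mixture_cdf(a, p))
```

Every classical property listed in the abstract (hazard rate, moments, and so on) is then computed from this renormalised density.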
Artificial Neural Network Application in Modelling Revenue Returns from Mobile Payment Services in Kenya
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140303.11
Artificial neural networks (ANNs) have recently shown great applicability in time-series analysis and forecasting, correctly deducing the unseen part of the population even when the sample data contain noise. In this paper we used a neural network to model revenue returns from mobile payment services, using a dataset extracted from the Central Bank of Kenya website. Networks with one or two hidden layers were tested with various combinations of neurons, and the results were compared in terms of forecasting error. It was observed that a properly trained ANN accurately forecasts revenue returns from mobile payment services in Kenya.
Artificial Neural Network Application in Modelling Revenue Returns from Mobile Payment Services in Kenya
doi:10.11648/j.ajtas.20140303.11
American Journal of Theoretical and Applied Statistics
2014-05-06
© Science Publishing Group
Kyalo Richard
Waititu Anthony
Wanjoya Anthony
Artificial Neural Network Application in Modelling Revenue Returns from Mobile Payment Services in Kenya
3
3
64
64
2014-05-06
2014-05-06
10.11648/j.ajtas.20140303.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140303.11
© Science Publishing Group
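A minimal sketch of the approach described above: a one-hidden-layer network trained by gradient descent to predict the next value of a series from lagged values. The toy series, lag count, and layer width are all assumptions for illustration, not the paper's configuration or data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy revenue-like series; three lagged values predict the next value.
series = np.sin(np.arange(60) / 6.0) + 0.05 * rng.normal(size=60)
lags = 3
X = np.column_stack([series[i:i + len(series) - lags] for i in range(lags)])
t = series[lags:]

# One hidden layer of 8 tanh units, trained by full-batch gradient descent.
h = 8
W1 = rng.normal(0, 0.5, size=(lags, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, size=h); b2 = 0.0
lr = 0.05

def forward(X):
    z = np.tanh(X @ W1 + b1)
    return z, z @ W2 + b2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - t) ** 2)     # forecasting error before training

for _ in range(500):
    z, pred = forward(X)
    err = pred - t
    gW2 = z.T @ err * (2 / len(t))
    gb2 = 2 * err.mean()
    dz = np.outer(err, W2) * (1 - z**2) * (2 / len(t))
    gW1 = X.T @ dz
    gb1 = dz.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
loss = np.mean((pred - t) ** 2)       # forecasting error after training
```

Comparing `loss` across architectures (one vs. two hidden layers, different neuron counts) mirrors the comparison by forecasting error described in the abstract.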
Stability Analysis of Mathematical Model of Caprine Arthritis Encephalitis
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140303.13
A mathematical model of caprine arthritis encephalitis (CAE), a disease with a great economic impact on goat farming, was investigated. The main infection route of the CAE virus is vertical transmission through breast milk; droplet infection and sexual infection are known as other transmission paths. From these facts, a mathematical model of CAE was created based on a model of the spread of sexually transmitted disease (STD) in humans. The model is analyzed to determine its stability.
Stability Analysis of Mathematical Model of Caprine Arthritis Encephalitis
doi:10.11648/j.ajtas.20140303.13
American Journal of Theoretical and Applied Statistics
2014-05-28
© Science Publishing Group
Teppei Hirata
Yoshihito Yonahara
Faramarz Asharif
Shiro Tamaki
Tsutomu Omatsu
Tetsuya Mizutani
Stability Analysis of Mathematical Model of Caprine Arthritis Encephalitis
3
3
80
80
2014-05-28
2014-05-28
10.11648/j.ajtas.20140303.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140303.13
© Science Publishing Group
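The stability analysis mentioned above typically checks whether every eigenvalue of the Jacobian at an equilibrium has negative real part. A hedged sketch using a simple SIS-type model for illustration (not the paper's CAE model or parameters):

```python
import numpy as np

def disease_free_stable(beta, gamma):
    """SIS model dI/dt = beta*I*(1 - I) - gamma*I.
    The Jacobian at the disease-free equilibrium I* = 0 is the
    1x1 matrix [beta - gamma]; stable iff all eigenvalues have
    negative real part."""
    jac = np.array([[beta - gamma]])
    return bool(np.all(np.linalg.eigvals(jac).real < 0))

low_transmission = disease_free_stable(beta=0.2, gamma=0.5)   # dies out
high_transmission = disease_free_stable(beta=0.8, gamma=0.5)  # persists
```

The same eigenvalue test generalises to the multi-compartment Jacobians that arise in STD-type transmission models.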
Estimation of Population Mean Under Unequal Probability Sampling with Unknown Selection Probabilities
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140303.12
In many practical problems of biology, medicine, and social sciences, inference about population mean has to be made from a sample generated with unknown selection probabilities that may vary between the units in the population. The most common approach to analysis of such samples is to use the ordinary sample mean as an estimator of population mean, under the assumption that the sample is “representative” of the population, although the concept of representative sampling is not given a formal mathematical definition. The main objective of this paper is to identify a necessary and sufficient condition for unbiasedness of the ordinary sample mean under unequal probability sampling. It is shown that in the process of sampling with unequal selection probabilities with a fixed sample size, the expectation of the ordinary sample mean is exactly equal to the population mean plus the covariance of the unit-level outcome with the unit-level relative selection probability, where the relative selection probability is defined as the ratio of the unit’s selection probability to the mean selection probability in the population. Hence, the ordinary sample mean is unbiased if and only if the unit-level outcome is uncorrelated with the unit-level relative selection probability. Samples generated under this condition may be considered “representative” of the population for the purpose of inference about population mean. It is also shown that under representative sampling, asymptotically conservative confidence intervals for population mean can be constructed based on the ordinary sample variance estimator as long as there are no positive correlations among the sample observations (negative correlations are allowed) and conditions for asymptotic normality of the sample mean are satisfied.
Estimation of Population Mean Under Unequal Probability Sampling with Unknown Selection Probabilities
doi:10.11648/j.ajtas.20140303.12
American Journal of Theoretical and Applied Statistics
2014-05-21
© Science Publishing Group
Emil Scosyrev
Estimation of Population Mean Under Unequal Probability Sampling with Unknown Selection Probabilities
3
3
72
72
2014-05-21
2014-05-21
10.11648/j.ajtas.20140303.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140303.12
© Science Publishing Group
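The bias identity stated in the abstract above can be verified numerically. For with-replacement draws with probabilities p_i, the expected sample mean is exactly sum(p_i * y_i), which should equal the population mean plus the covariance of the outcome with the relative selection probability. The finite population below is simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite population whose outcomes are correlated with selection probability.
N = 1000
y = rng.normal(50, 10, size=N)
p = np.exp(0.02 * y)        # selection tilted toward large outcomes
p = p / p.sum()             # draw probabilities summing to 1

mu = y.mean()
r = p / p.mean()            # relative selection probability (mean exactly 1)
cov_yr = np.mean((y - mu) * (r - 1.0))

# For with-replacement draws, E[sample mean] = sum_i p_i * y_i exactly.
expected_sample_mean = np.sum(p * y)
identity_rhs = mu + cov_yr
```

Because the tilt makes cov_yr positive here, the ordinary sample mean is biased upward; setting p constant makes cov_yr zero and recovers unbiasedness, matching the paper's necessary and sufficient condition.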
Statistical Process Control Analysis Based on Software Q-Das
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140304.12
The software Q-das, used for statistical analysis of the production process, can realize real-time monitoring during production and thereby effectively support a factory's needs in data management, long-term backup, and automatic report generation. This paper describes specific evaluation methods of statistical quality control, such as the single-value chart, control charts, and process capability indices. Through these configurations of figures, tables, and statistical analysis, the entire production process can be correctly observed, and the fluctuations that occur during production can be controlled and suppressed. Finally, the paper presents the application of the Q-das software to statistical process control analysis based on a specific example.
Statistical Process Control Analysis Based on Software Q-Das
doi:10.11648/j.ajtas.20140304.12
American Journal of Theoretical and Applied Statistics
2014-07-14
© Science Publishing Group
Wu Guoqing
Luo Yiping
Ren Hongjuan
Statistical Process Control Analysis Based on Software Q-Das
3
4
95
95
2014-07-14
2014-07-14
10.11648/j.ajtas.20140304.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140304.12
© Science Publishing Group
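The process capability indices mentioned above are simple functions of the specification limits and the process mean and standard deviation. A minimal sketch on simulated measurements with hypothetical specification limits (not data from the paper's example):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated measurements from a process centred near its target of 10.0.
lsl, usl = 9.7, 10.3            # hypothetical specification limits
x = rng.normal(10.0, 0.05, size=2000)

def capability_indices(x, lsl, usl):
    """Cp ignores centring; Cpk penalises a shifted process mean."""
    mu, sigma = x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)
    return cp, cpk

cp, cpk = capability_indices(x, lsl, usl)
```

Cpk equals Cp only when the process is perfectly centred; a gap between the two indices is the usual signal that the mean has drifted off target.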
Research on Construction of the Green Logistics Evaluation Index System and Determination of Index Weight
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140304.13
The evaluation index system for green logistics development is the focus of this paper. The judgment matrix is obtained through questionnaire investigation and the expert scoring method, and the weight of each index is established using the analytic hierarchy process and the fuzzy comprehensive evaluation method. The conclusions of this paper lay a foundation for the theory and methods of statistical evaluation of green logistics.
Research on Construction of the Green Logistics Evaluation Index System and Determination of Index Weight
doi:10.11648/j.ajtas.20140304.13
American Journal of Theoretical and Applied Statistics
2014-07-22
© Science Publishing Group
Li Zhou
Jie Zhu
Jian Guo
Research on Construction of the Green Logistics Evaluation Index System and Determination of Index Weight
3
4
99
99
2014-07-22
2014-07-22
10.11648/j.ajtas.20140304.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140304.13
© Science Publishing Group
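In the analytic hierarchy process mentioned above, index weights are the normalised principal eigenvector of the pairwise judgment matrix, and a consistency ratio checks the expert judgments. The 3-criterion matrix below is hypothetical, not the paper's survey result:

```python
import numpy as np

# Hypothetical reciprocal judgment matrix for three green-logistics
# criteria (values are illustrative expert scores).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

def ahp_weights(A):
    """Index weights = normalised principal eigenvector of A."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

def consistency_ratio(A, w, ri=0.58):
    """CR = CI / RI, where CI = (lambda_max - n)/(n - 1) and
    RI = 0.58 is Saaty's random index for n = 3."""
    n = A.shape[0]
    lam = (A @ w / w).mean()
    return ((lam - n) / (n - 1)) / ri

w = ahp_weights(A)
cr = consistency_ratio(A, w)
```

A CR below 0.1 is the conventional threshold for accepting the judgment matrix; otherwise the expert scores are revisited.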
On the Detection of Influential Outliers in Linear Regression Analysis
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140304.14
In this paper, we propose a measure for detecting influential outliers in linear regression analysis. The performance of the proposed method, called the Coefficient of Determination Ratio (CDR), is compared with some standard measures of influence, namely Cook's distance, studentised deleted residuals, leverage values, the covariance ratio, and the standardized difference in fits (DFFITS). Two existing datasets, one artificial and one real, are employed for the comparison and to illustrate the efficiency of the proposed measure. The proposed measure appears more responsive in detecting influential outliers in both simple and multiple linear regression analyses. The CDR thus provides a useful alternative to existing methods for detecting outliers in structured datasets.
On the Detection of Influential Outliers in Linear Regression Analysis
doi:10.11648/j.ajtas.20140304.14
American Journal of Theoretical and Applied Statistics
2014-07-30
© Science Publishing Group
Arimiyaw Zakaria
Nathaniel Kwamina Howard
Bismark Kwao Nkansah
On the Detection of Influential Outliers in Linear Regression Analysis
3
4
106
106
2014-07-30
2014-07-30
10.11648/j.ajtas.20140304.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140304.14
© Science Publishing Group
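Cook's distance, one of the benchmark measures named in the abstract above, combines a point's residual with its leverage. A minimal sketch on simulated data with one planted influential point (the paper's datasets are not reproduced in this feed):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simple regression data with one planted high-leverage, large-residual point.
n = 30
x = rng.uniform(0, 10, size=n)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, size=n)
x[-1], y[-1] = 10.0, 12.0

X = np.column_stack([np.ones(n), x])

def cooks_distance(X, y):
    """D_i = e_i^2 * h_ii / (p * s^2 * (1 - h_ii)^2), built from the
    hat matrix H and the overall residual variance estimate s^2."""
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)
    resid = y - H @ y
    p = X.shape[1]
    s2 = resid @ resid / (len(y) - p)
    r2 = resid**2 / (s2 * (1.0 - h))        # squared studentised residuals
    return r2 * h / (p * (1.0 - h))

d = cooks_distance(X, y)
```

Points with D_i above the common rule-of-thumb cutoff 4/n are flagged as influential; here the planted point dominates.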
Neighbor Designs: A New Approach of Local Control
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140304.15
It is well known that randomization, replication, and local control play important roles in experimental design. Neighbor designs provide a tool for local control in situations where experimental units are influenced by neighboring units. A neighbor design is called one-dimensional if neighbor effects are controlled in only one direction, i.e., along rows or along columns; in a two-dimensional design, neighbor effects are controlled in both directions (rows and columns). In this paper the concept of neighbor designs, their types, and their importance are discussed with examples. Models of neighbor effects for different situations are also discussed.
Neighbor Designs: A New Approach of Local Control
doi:10.11648/j.ajtas.20140304.15
American Journal of Theoretical and Applied Statistics
2014-08-08
© Science Publishing Group
Naqvi Hamad
Najeeb Haider
Neighbor Designs: A New Approach of Local Control
3
4
110
110
2014-08-08
2014-08-08
10.11648/j.ajtas.20140304.15
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140304.15
© Science Publishing Group
Comparison of Precision of Systematic Sampling with Some other Probability Samplings
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140304.16
This paper discusses the precision of systematic sampling relative to other related probability sampling methods. Comparisons among these sampling schemes and their estimators are made using empirical evidence from a finite population analysis.
Comparison of Precision of Systematic Sampling with Some other Probability Samplings
doi:10.11648/j.ajtas.20140304.16
American Journal of Theoretical and Applied Statistics
2014-08-13
© Science Publishing Group
Habib Ahmed Elsayir
Comparison of Precision of Systematic Sampling with Some other Probability Samplings
3
4
116
116
2014-08-13
2014-08-13
10.11648/j.ajtas.20140304.16
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140304.16
© Science Publishing Group
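The precision comparison described above can be made exact for a small finite population: with N = nk, the systematic sample mean has only k possible values (one per random start), so its variance can be enumerated and set against the SRS-without-replacement variance. The linear-trend population below is an illustrative example, not data from the paper:

```python
import numpy as np

# Small finite population with a linear trend, N = n * k.
Y = np.arange(1.0, 21.0)        # N = 20
n, k = 4, 5                     # sample size 4, sampling interval 5

ybar = Y.mean()

# Variance of the systematic-sample mean: average over the k possible starts.
sys_means = np.array([Y[start::k].mean() for start in range(k)])
var_sys = np.mean((sys_means - ybar) ** 2)

# Variance of the SRS sample mean (without replacement).
S2 = Y.var(ddof=1)
var_srs = (1 - n / len(Y)) * S2 / n
```

For a population ordered along a trend, every systematic sample spreads across the trend, so var_sys falls well below var_srs; for a population in random order the two are close, which is the kind of empirical comparison the paper reports.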
Homogeneous Fluid Turbulence before the Final Period of Decay for Four-Point Correlation in a Rotating System for First-Order Reactant
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140304.11
Following Deissler's approach, the decay of homogeneous turbulence at times before the final period, for the concentration fluctuation of a dilute contaminant undergoing a first-order chemical reaction in a rotating system, is studied for the case of four-point correlation. Two-, three-, and four-point correlation equations are obtained and converted to spectral form by Fourier transform, with the terms containing quintuple correlations neglected in comparison with the third- and fourth-order correlation terms. Finally, by integrating the energy spectrum over all wave numbers, the energy decay law of homogeneous turbulent flow for the concentration fluctuations before the final period in a rotating system for four-point correlation is obtained and shown graphically in the text.
Homogeneous Fluid Turbulence before the Final Period of Decay for Four-Point Correlation in a Rotating System for First-Order Reactant
doi:10.11648/j.ajtas.20140304.11
American Journal of Theoretical and Applied Statistics
2014-07-03
© Science Publishing Group
M. Monuar Hossain
M. Abu Bkar Pk
M. S. Alam Sarker
Homogeneous Fluid Turbulence before the Final Period of Decay for Four-Point Correlation in a Rotating System for First-Order Reactant
3
4
89
89
2014-07-03
2014-07-03
10.11648/j.ajtas.20140304.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140304.11
© Science Publishing Group
A Statistical Analysis of Value of Imports in Nigeria
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140305.11
The value of the products imported into a country each year goes a long way toward telling us how much those products are in demand in the country, and some products form the major imports of some countries yearly. Previous studies have focused on the major determinants of imports in several countries, including Nigeria. In the present study, we used a regression approach to identify the major significant imports in Nigeria; this will help in the effective distribution of human resources and services and improve the balance of payments. Stepwise regression and transformation of the data via first-order differencing were employed to remove multicollinearity from the data, and a predictive model was then specified for the prediction of future imports in the country. Our results show that the major significant imports in Nigeria during the period under study were miscellaneous manufactured goods, machinery and transport equipment, food and live animals, and beverages and tobacco.
A Statistical Analysis of Value of Imports in Nigeria
doi:10.11648/j.ajtas.20140305.11
American Journal of Theoretical and Applied Statistics
2014-08-17
© Science Publishing Group
Nwaigwe, Chrysogonus Chinagorom
Iwu Hycinth Chukwudi
A Statistical Analysis of Value of Imports in Nigeria
3
5
124
124
2014-08-17
2014-08-17
10.11648/j.ajtas.20140305.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140305.11
© Science Publishing Group
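First-order differencing removes multicollinearity between import series mainly by stripping out the common time trend they share. A hedged sketch on simulated trending series (the paper's import data are not reproduced in this feed):

```python
import numpy as np

rng = np.random.default_rng(5)

# Two trending series are nearly collinear in levels; differencing
# removes the shared trend. Illustrative data only.
t = np.arange(100, dtype=float)
x1 = 2.0 * t + rng.normal(0, 5, size=100)
x2 = 3.0 * t + rng.normal(0, 5, size=100)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

r_levels = corr(x1, x2)                  # near 1: severe multicollinearity
r_diff = corr(np.diff(x1), np.diff(x2))  # much smaller after differencing
```

After differencing, stepwise regression can select among the import categories without the near-singular design matrix that the shared trend would otherwise cause.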
Partially Neighbor Balanced Designs for Circular Blocks
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140305.12
A partially neighbor balanced design is a design in which, for any fixed treatment, the other treatments occur as neighbors λi times. This paper generates infinite series of one-dimensional partially neighbor balanced designs for v = n treatments, where the blocks are considered circular. The designs given here are partially balanced in terms of nearest neighbors and not necessarily in terms of variance. Binary and non-binary concepts have been used in the construction: Theorem 1 generates binary generalized 2-neighbor designs and Theorem 2 generates non-binary generalized 3-neighbor designs, for odd and even numbers of treatments simultaneously. This concept remains relatively under-explored in the literature. The objective is to decrease the error variance due to neighbor effects and to reduce computational cost.
Partially Neighbor Balanced Designs for Circular Blocks
doi:10.11648/j.ajtas.20140305.12
American Journal of Theoretical and Applied Statistics
2014-08-22
© Science Publishing Group
Naqvi Hamad
Partially Neighbor Balanced Designs for Circular Blocks
3
5
129
129
2014-08-22
2014-08-22
10.11648/j.ajtas.20140305.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140305.12
© Science Publishing Group
Analysis of Indirect Human Influences and its Bad Impacts on Ecosystems of Natural Forest Resources (Sundarbans) in Bangladesh
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140305.13
Sundarbans plays a vital role for human survivability from cradle to grave, providing both tangible and intangible benefits. The total forest area of Bangladesh is about 2.47 million ha, which accounts for about 18% of the total land area of the country and constitutes 0.15% of the world’s total tropical forests (Haque, 2000), although an estimate from 1993 put the tree cover at only 5-7% of the country’s area (FAO, 1993). Sundarbans comprises 45% of the total productive forests in Bangladesh and contributes about 50% of forest-related revenue (Awal, 2007). The Sundarbans is the largest single mangrove forest in the world, occupying about 6,029 km2 in Bangladesh with the rest in India (Iftekhar & Islam, 2004). At the advent of British rule in 1765, the Sundarbans forests were double their present size (Seidensticker and Hai, 1983; Khan, 1997). But the forest is facing tremendous problems (Awal, 2007, 2009, 2014). In particular, a serious killer disease (top dying) of H. fomes in Sundarbans is affecting millions of trees (Awal, 2007). The loss of H. fomes will have a major impact on the Sundarbans mangrove ecosystem, as well as lead to economic losses (Awal, 2007, 2009, 2014). The forest is now seriously threatened by direct and indirect human destruction (Awal, 2007, 2009, 2014) and by ecological pollution (Awal, 2007). The cause of this dieback is still unknown (Awal, 2007). The present work has investigated one of the possible factors that might be causing this top-dying, namely the concentrations of various chemical elements present in the soil or sediments, particularly exchangeable K and heavy metals; other chemical parameters, such as the pH, the moisture content of the soil or sediment, and nutrient status, were also assessed in relation to indirect human destruction of Sundarbans natural resources (Awal, 2007).
A questionnaire survey was conducted among different groups of people inside and outside of Sundarbans to explore local perceptions as to the possible causes of top dying (Awal, 2007, 2009, 2014). This confirmed the increase in top-dying prevalence (Awal, 2007).
Analysis of Indirect Human Influences and its Bad Impacts on Ecosystems of Natural Forest Resources (Sundarbans) in Bangladesh
doi:10.11648/j.ajtas.20140305.13
American Journal of Theoretical and Applied Statistics
2014-08-24
© Science Publishing Group
Awal, Mohd Abdul
Analysis of Indirect Human Influences and its Bad Impacts on Ecosystems of Natural Forest Resources (Sundarbans) in Bangladesh
3
5
140
140
2014-08-24
2014-08-24
10.11648/j.ajtas.20140305.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140305.13
© Science Publishing Group
Investigation of Some Estimators Via Taylor Series Approach and an Application
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140305.14
In this study, the use of the Taylor series method was investigated for calculating the means, and the mean square errors of unit-ratio estimators having certain properties, with a single auxiliary variable under simple random sampling. An application was performed in this respect. The study population comprised 111 secondary schools from 18 districts of Trabzon province. The auxiliary variable (x) was taken as the number of students, whereas the main variable (y) was taken as the number of teachers. The sample size was calculated as 45 for the unit ratio having certain features. Afterwards, the theoretically proposed mean and unit-ratio estimators having certain properties were compared numerically. Random sampling was performed using the SPSS 20 program, thus giving an equal chance to the units sampled while preserving the variability in the population.
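For reference, the classical ratio estimator and its first-order Taylor (linearization) approximation to the MSE under simple random sampling can be sketched as follows. This is a generic sketch with toy numbers, not the paper's school data:

```python
import statistics

def ratio_estimator_mse(y, x, X_bar, N):
    """Ratio estimate of the population mean of y using auxiliary x,
    with the first-order Taylor approximation to its MSE under SRS:
    MSE ~= ((1 - f) / n) * (S_y^2 + R^2 * S_x^2 - 2 * R * S_xy),
    where f = n/N is the sampling fraction and R the sample ratio."""
    n = len(y)
    y_bar, x_bar = statistics.mean(y), statistics.mean(x)
    R = y_bar / x_bar                # sample ratio
    estimate = R * X_bar             # ratio estimate of the mean of y
    s_y2, s_x2 = statistics.variance(y), statistics.variance(x)
    s_xy = sum((yi - y_bar) * (xi - x_bar)
               for yi, xi in zip(y, x)) / (n - 1)
    mse = ((1 - n / N) / n) * (s_y2 + R * R * s_x2 - 2 * R * s_xy)
    return estimate, mse
```

When y is exactly proportional to x, the linearized MSE collapses to zero, which is the usual motivation for ratio estimation with a strongly correlated auxiliary variable (here, students predicting teachers).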
Investigation of Some Estimators Via Taylor Series Approach and an Application
doi:10.11648/j.ajtas.20140305.14
American Journal of Theoretical and Applied Statistics
2014-09-20
© Science Publishing Group
Tolga Zaman
Vedat Saglam
Murat Sagir
Erdinc Yucesoy
Mujgan Zobu
Investigation of Some Estimators Via Taylor Series Approach and an Application
3
5
147
147
2014-09-20
2014-09-20
10.11648/j.ajtas.20140305.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140305.14
© Science Publishing Group
New Criteria of Model Selection and Model Averaging in Linear Regression Models
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140305.15
Model selection is an important part of any statistical analysis. Many tools have been suggested for selecting the best model, from both frequentist and Bayesian perspectives. There is often considerable uncertainty in selecting a particular model as the best approximating model. Model selection uncertainty arises when the data are used for both model selection and parameter estimation, and bias in estimators of model parameters often arises when data-based selection has been done. Therefore, model averaging of the parameter estimators is performed to alleviate the bias of model selection within a set of candidate models, by combining the information from that set. This paper is two-fold: first, new criteria of model selection are proposed based on different averages of AIC, BIC, AICc, and HQC; second, model averaging is introduced to compare the parameter estimators under model averaging with those under model selection. Two simulation studies are considered. The first, for model selection, showed that the newly proposed criteria lie between some of the known criteria such as AIC, BIC, AICc, and HQC, and so can be used as new criteria of model selection. The second, for model averaging, showed that the parameter estimators have less bias and smaller predicted mean square error (PMSE) than the parameter estimators under model selection.
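All four classical criteria can be computed from the residual sum of squares of a Gaussian linear model, so composites of them are cheap to form. The sketch below uses the plain arithmetic mean as the composite; that is only one possible "average", and the abstract does not specify which averages the paper actually proposes:

```python
import math

def info_criteria(rss, n, k):
    """AIC, BIC, AICc and HQC for a Gaussian linear model with k
    parameters fitted to n observations, computed (up to a common
    additive constant) from the residual sum of squares."""
    ll_part = n * math.log(rss / n)           # -2*loglik up to a constant
    aic = ll_part + 2 * k
    return {
        "AIC": aic,
        "BIC": ll_part + k * math.log(n),
        "AICc": aic + 2 * k * (k + 1) / (n - k - 1),
        "HQC": ll_part + 2 * k * math.log(math.log(n)),
    }

def averaged_criterion(rss, n, k):
    """One composite criterion: the arithmetic mean of the four
    classical criteria (an illustrative choice, not necessarily the
    paper's)."""
    c = info_criteria(rss, n, k)
    return sum(c.values()) / len(c)
```

By construction such an average always lies between the smallest and largest of the four criteria, matching the abstract's observation that the proposed criteria lie between the known ones.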
New Criteria of Model Selection and Model Averaging in Linear Regression Models
doi:10.11648/j.ajtas.20140305.15
American Journal of Theoretical and Applied Statistics
2014-10-15
© Science Publishing Group
Magda Mohamed Mohamed Haggag
New Criteria of Model Selection and Model Averaging in Linear Regression Models
3
5
166
166
2014-10-15
2014-10-15
10.11648/j.ajtas.20140305.15
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140305.15
© Science Publishing Group
Assessing the Lifetime Performance Index Using Exponentiated Frechet Distribution with the Progressive First-Failure-Censoring Scheme
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140306.11
Process capability analysis has been widely used to monitor the performance of industrial processes. In practice, the lifetime performance index C_L is a popular means to assess the performance and potential of a process, where L is the lower specification limit. This study constructs the maximum likelihood (ML) and Bayesian estimators of C_L for the exponentiated Frechet (EF) model under the progressive first-failure-censoring scheme. These estimates are then used to construct a confidence interval for C_L. The ML and Bayesian estimators of C_L are then utilized to develop a new hypothesis testing procedure when L is known. Finally, we give a practical example and a Monte Carlo simulation study to illustrate the use of the testing procedure at a given significance level.
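One common moment-based form of the lifetime performance index is C_L = (μ − L)/σ. The plug-in sketch below is generic: it is not the paper's ML or Bayes estimator under the exponentiated Frechet model, and it assumes a complete sample rather than progressively first-failure-censored data.

```python
import statistics

def lifetime_performance_index(sample, L):
    """Plug-in estimate of C_L = (mu - L) / sigma, where L is the lower
    specification limit. Larger C_L indicates lifetimes further above
    the specification limit relative to their spread. (A generic sketch;
    model-based ML/Bayes estimators under censoring differ.)"""
    mu = statistics.mean(sample)
    sigma = statistics.stdev(sample)
    return (mu - L) / sigma
```

A testing procedure of the kind described would reject "the process is capable" in favor of the alternative when the estimated C_L falls below a critical value determined by the significance level.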
Assessing the Lifetime Performance Index Using Exponentiated Frechet Distribution with the Progressive First-Failure-Censoring Scheme
doi:10.11648/j.ajtas.20140306.11
American Journal of Theoretical and Applied Statistics
2014-10-17
© Science Publishing Group
Ahmed Abo-Elmagd Soliman
Essam Al-Sayed Ahmed
Ahmed Hamed Abd Ellah
Al-Wageh Ahmed Farghal
Assessing the Lifetime Performance Index Using Exponentiated Frechet Distribution with the Progressive First-Failure-Censoring Scheme
3
6
176
176
2014-10-17
2014-10-17
10.11648/j.ajtas.20140306.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140306.11
© Science Publishing Group
Statistical Analysis of Domestic Price Volatility of Sugar in Ethiopia
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140306.12
The aim of this study was to model and identify the determinants of monthly domestic price volatility of sugar in Ethiopia over the period from December 2001 to December 2011 GC. The volatility in the domestic price of sugar has been found to vary over months, suggesting the use of the GARCH family approach. Thus, a family of time series models, namely ARCH, GARCH, TGARCH and EGARCH models with ARIMA mean equations, was fitted to the data. The best fitting model within each family was selected based on how well the model captures the variation in the data, with the optimal lag specification assessed via AIC and SBIC. Comparisons of the symmetric and asymmetric models were carried out based on the significance of the asymmetric term in the TGARCH and EGARCH models. The analysis showed that a statistically significant asymmetric term and the smallest forecast error established that the EGARCH model with a Student-t distributional assumption for the residuals was superior to the GARCH and TGARCH models. Therefore, ARIMA(0,0,2)-EGARCH(1,3) with Student-t errors was chosen as the best fitting model for the monthly domestic price volatility of sugar. Moreover, it was found that among the candidate explanatory variables, the import price of sugar, fuel oil price, exchange rate (dollar-birr), general inflation, inflation for non-food items, inflation for food items, past shocks, and past volatility of the monthly domestic price had statistically significant effects on the current month's domestic price volatility of sugar.
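The conditional-variance recursion at the heart of the GARCH family can be sketched in a few lines. Shown here is plain GARCH(1,1) with arbitrary parameter values; EGARCH, which the paper ultimately selects, instead models log σ² and adds an asymmetry (leverage) term, but the recursive structure is analogous:

```python
import math
import random

def simulate_garch11(n, omega=0.05, alpha=0.10, beta=0.85, seed=1):
    """Simulate a GARCH(1,1) process with Gaussian innovations:
    sigma2_t = omega + alpha * e_{t-1}^2 + beta * sigma2_{t-1}.
    Parameter values are arbitrary illustrations; alpha + beta < 1
    gives a stationary process with unconditional variance
    omega / (1 - alpha - beta)."""
    rng = random.Random(seed)
    sigma2 = omega / (1 - alpha - beta)   # start at unconditional variance
    e = 0.0
    returns = []
    for _ in range(n):
        sigma2 = omega + alpha * e * e + beta * sigma2
        e = math.sqrt(sigma2) * rng.gauss(0.0, 1.0)
        returns.append(e)
    return returns
```

Fitting such models to real price series is normally done by (quasi-)maximum likelihood with a volatility-modeling package; the recursion above is only the data-generating skeleton that those fits assume.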
Statistical Analysis of Domestic Price Volatility of Sugar in Ethiopia
doi:10.11648/j.ajtas.20140306.12
American Journal of Theoretical and Applied Statistics
2014-10-24
© Science Publishing Group
Anteneh Asmare Godana
Yibeltal Arega Ashebir
Tewodros Getinet Yirtaw
Statistical Analysis of Domestic Price Volatility of Sugar in Ethiopia
3
6
183
183
2014-10-24
2014-10-24
10.11648/j.ajtas.20140306.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140306.12
© Science Publishing Group
An Inquiry into the Distributional Properties of Reliability Rate
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140306.14
The present paper attempts to model the maximum likelihood estimation of reliability rate and the related statistical properties. Reliability in general refers to the probability that a component or system is able to perform its function satisfactorily during a specific period under normal operating conditions. It is estimated as the fraction of time the unit/system is available for operation. For practical purposes, reliability rate is usually estimated using maximum likelihood estimator (MLE) from sample observations. No study has gone beyond this to analyze the statistical properties of the MLE of reliability rate; the present study is an attempt at such an inquiry. We derive the density function of reliability rate and also the moments; however, it is found that an evaluation of these two moments is very difficult as the series converge very slowly.
The present paper attempts to model the maximum likelihood estimation of reliability rate and the related statistical properties. Reliability in general refers to the probability that a component or system is able to perform its function satisfactorily during a specific period under normal operating conditions. It is estimated as the fraction of time the unit/system is available for operation. For practical purposes, reliability rate is usually estimated using maximum likelihood estimator (MLE) from sample observations. No study has gone beyond this to analyze the statistical properties of the MLE of reliability rate; the present study is an attempt at such an inquiry. We derive the density function of reliability rate and also the moments; however, it is found that an evaluation of these two moments is very difficult as the series converge very slowly.
An Inquiry into the Distributional Properties of Reliability Rate
doi:10.11648/j.ajtas.20140306.14
American Journal of Theoretical and Applied Statistics
2014-11-14
© Science Publishing Group
Vijayamohanan Pillai N.
An Inquiry into the Distributional Properties of Reliability Rate
3
6
201
201
2014-11-14
2014-11-14
10.11648/j.ajtas.20140306.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140306.14
© Science Publishing Group
Modeling Export Price of Tea in Kenya: Comparison of Artificial Neural Network and Seasonal Autoregressive Integrated Moving Average
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140306.16
The agriculture sector is a key driver of economic growth in Kenya. It remains the main source of livelihood for the majority of the Kenyan people. Tea, coffee, and horticulture are the main agricultural exports in Kenya. The export prices of these commodities fluctuate mainly due to the law of demand and supply. Other reasons include the quality of goods and the effect of inflation on the dollar or other hard currencies. Further, farmers and their cooperative societies are affected by the local foreign exchange rate. The government and other stakeholders require prior information on price trends for ease of planning. Thus it is important to forecast the export prices of these commodities. The purpose of this study is to compare the forecasting performance of an artificial neural network (ANN) model and a SARIMA model for the export price of tea in Kenya. Secondary data were obtained from the Kenya National Bureau of Statistics (KNBS); a total of 185 monthly export prices were obtained. A three-layer feed-forward artificial neural network was trained using 70% of the data, and the resulting ANN model was used to predict export prices for the remaining 30%. A SARIMA model was also used to predict export prices for the same duration. Forecasting performance was evaluated using the root mean squared error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE). The ANN demonstrated superior performance over the SARIMA model and can accurately predict the export price of tea.
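The three accuracy measures used for the model comparison are straightforward to compute on a hold-out set. A minimal sketch with toy numbers (not the KNBS tea-price series):

```python
def forecast_errors(actual, predicted):
    """Root mean squared error, mean absolute error and mean absolute
    percentage error for a hold-out forecast comparison. MAPE assumes
    no actual value is zero."""
    n = len(actual)
    rmse = (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    mape = 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n
    return rmse, mae, mape
```

In a study like this, the model (ANN or SARIMA) with the smaller RMSE, MAE and MAPE on the 30% hold-out period is judged the better forecaster.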
Modeling Export Price of Tea in Kenya: Comparison of Artificial Neural Network and Seasonal Autoregressive Integrated Moving Average
doi:10.11648/j.ajtas.20140306.16
American Journal of Theoretical and Applied Statistics
2014-12-23
© Science Publishing Group
Mbiriri Ikonya
Peter Mwita
Anthony Wanjoya
Modeling Export Price of Tea in Kenya: Comparison of Artificial Neural Network and Seasonal Autoregressive Integrated Moving Average
3
6
216
216
2014-12-23
2014-12-23
10.11648/j.ajtas.20140306.16
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140306.16
© Science Publishing Group
Probability Model of Forward Birth Interval and Its Application
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140306.18
In the renewal theory approach, it is well known that the limiting forms of the probability density functions of the backward recurrence time and forward recurrence time, which correspond to the open birth interval and the forward birth interval, are identical on the assumption that the renewal densities do not change over time. The forward birth interval is defined as the time between the survey date and the date of the next birth posterior to the survey date; it is a good index of current change in fertility behavior. The present model has been derived on the assumption that females are not exposed to the risk of conception immediately after the termination of Post-Partum Amenorrhea (PPA); rather, they may be exposed to the risk of conception at different points of time after the termination of PPA because of socio-cultural factors or contraceptive practices. A probability model for the forward birth interval, regardless of parity, is derived assuming that the renewal density does not change over time and that females are exposed to the risk of conception at different points of time. In this model, fecundability (λ) is a parameter, and the duration from the termination of PPA to the state of exposure is a random variable (µ) following an exponential distribution. The maximum likelihood estimation technique has been used to estimate the parameters λ and µ through the derived model. The estimated values of λ and µ are 1.1051 and 2.841, with variances 0.067 and 0.79, respectively, and covariance -0.026. With these estimates, the expected frequencies for the distribution were obtained, with χ2 = 0.6057, indicating close agreement between the observed and expected frequencies. Thus, the derived probability model explains the fertility behavior of the observed data satisfactorily.
Probability Model of Forward Birth Interval and Its Application
doi:10.11648/j.ajtas.20140306.18
American Journal of Theoretical and Applied Statistics
2015-01-06
© Science Publishing Group
Ajay Shankar Singh
Probability Model of Forward Birth Interval and Its Application
3
6
227
227
2015-01-06
2015-01-06
10.11648/j.ajtas.20140306.18
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140306.18
© Science Publishing Group
Estimation of Parameters of the Kumaraswamy Distribution Based on General Progressive Type II Censoring
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140306.17
In this paper, we present a study of the estimation of the parameters of the Kumaraswamy distribution based on general progressive Type-II censoring. The estimates are derived using the maximum likelihood and Bayesian approaches. In the Bayesian approach, the two parameters are assumed to be random variables and estimators for the parameters are obtained using the well-known squared error loss (SEL) function. The findings are illustrated with actual and computer-generated data.
Estimation of Parameters of the Kumaraswamy Distribution Based on General Progressive Type II Censoring
doi:10.11648/j.ajtas.20140306.17
American Journal of Theoretical and Applied Statistics
2014-12-29
© Science Publishing Group
Mostafa Mohie Eldin
Nora Khalil
Montaser Amein
Estimation of Parameters of the Kumaraswamy Distribution Based on General Progressive Type II Censoring
3
6
222
222
2014-12-29
2014-12-29
10.11648/j.ajtas.20140306.17
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140306.17
© Science Publishing Group
Bayesian Inference for the Left Truncated Exponential Distribution Based on Pooled Type-II Censored Samples
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140306.15
In this paper, maximum likelihood and Bayesian estimation are developed based on the pooled sample of two independent Type-II censored samples from the left truncated exponential distribution. The Bayesian estimation is discussed using different loss functions. The problem of predicting the failure times of a future sample from the same population is also discussed from a Bayesian viewpoint. A Monte Carlo simulation study is conducted to compare the maximum likelihood estimator with the Bayesian estimators. Finally, an illustrative example is presented to demonstrate the different inference methods discussed here.
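The exponential ingredient of this setting has a well-known closed-form MLE under Type-II censoring. The sketch below covers that plain-exponential building block for a single sample; the paper's left truncation adds a shift parameter and the pooling combines two such censored samples, both omitted here:

```python
def exp_mean_mle_type2(failures, n):
    """MLE of the exponential mean under Type-II censoring: only the
    r smallest of n lifetimes are observed, and the estimator is the
    total time on test divided by r. (Single-sample, no truncation;
    the paper's pooled left-truncated setting is more general.)"""
    failures = sorted(failures)
    r = len(failures)
    # total time on test: observed failures plus (n - r) units still
    # running at the r-th failure time
    ttt = sum(failures) + (n - r) * failures[-1]
    return ttt / r
```

Bayesian estimators under, e.g., squared error loss replace this point estimate with a posterior mean computed from the same censored likelihood and a chosen prior.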
Bayesian Inference for the Left Truncated Exponential Distribution Based on Pooled Type-II Censored Samples
doi:10.11648/j.ajtas.20140306.15
American Journal of Theoretical and Applied Statistics
2014-12-03
© Science Publishing Group
Mustafa Mohie El-Din
Yahia Abdel-Aty
Ahmed Shafay
Magdy Nagy
Bayesian Inference for the Left Truncated Exponential Distribution Based on Pooled Type-II Censored Samples
3
6
210
210
2014-12-03
2014-12-03
10.11648/j.ajtas.20140306.15
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140306.15
© Science Publishing Group
Causal Inference in Block-Randomized Experiments: Analysis Based on Neyman’s Stochastic Causal Model
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140306.13
In Neyman’s causal model (NCM), each unit included in a two-arm randomized experiment has a pair of potential outcomes – one outcome would be observed under treatment, the other under control. In the stochastic version of NCM, both potential outcomes are viewed as (possibly) non-degenerate random variables to allow for stochastic effects of post-interventional factors, such as random measurement error. The unit-level treatment effect is the expected outcome under treatment minus that under control. The sample average treatment effect (SATE) is the mean of the unit-level effects in the set of all units included in the experiment. The population average treatment effect (PATE) is the mean of the unit-level effects defined on the population of all units eligible for experiment. The purpose of this paper is to develop a non-parametric theory of inference about SATE and PATE in block-randomized experiments, using the mathematical formalism of stochastic NCM. Inference about SATE is examined under randomization distribution without a probability model for selection of units into experiment. For inference about PATE two probability models for selection are considered: (1) simple random sampling and (2) stratified random sampling, each followed by block-randomized treatment assignment. It is shown that under these conditions, the ordinary “difference of means” estimator (mean observed outcome under treatment minus that under control) is consistent for both SATE and PATE. The variance of this estimator is derived under randomization distribution alone and under simple random sampling and stratified random sampling as selection models. Variance estimators producing confidence intervals with nominal asymptotic coverage for PATE under these selection models are proposed, and it is also shown that these intervals have at least nominal asymptotic coverage for SATE under randomization distribution. 
Thus, when a selection model is correctly specified, the proposed methods guarantee valid inference about both SATE and PATE, while under incorrect specification of the selection model valid inference about SATE is still guaranteed. This is an important property because in many types of randomized experiments (e.g., clinical trials in medicine), units are not selected into experiment by a mechanism with known selection probabilities.
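The block-weighted difference-of-means estimator and its conservative Neyman variance estimate described in this abstract can be sketched as follows (an illustrative simulation, not the paper's derivation; the data, effect size, and blocking scheme are invented):

```python
import numpy as np

def neyman_block_estimate(y, treat, block):
    """Block-weighted difference-of-means estimate of the average
    treatment effect, with a conservative Neyman variance estimate.

    y     : observed outcomes
    treat : 1 = treatment, 0 = control
    block : block label for each unit
    """
    y, treat, block = map(np.asarray, (y, treat, block))
    n_total = len(y)
    est, var = 0.0, 0.0
    for b in np.unique(block):
        in_b = block == b
        yt = y[in_b & (treat == 1)]
        yc = y[in_b & (treat == 0)]
        w = in_b.sum() / n_total                  # block weight N_b / N
        est += w * (yt.mean() - yc.mean())
        # conservative within-block variance: s_t^2/n_t + s_c^2/n_c
        var += w**2 * (yt.var(ddof=1) / len(yt) + yc.var(ddof=1) / len(yc))
    return est, np.sqrt(var)

# simulated example: two blocks, constant unit-level effect of 2
rng = np.random.default_rng(0)
block = np.repeat([0, 1], 100)
treat = np.tile([0, 1], 100)
y = 2 * treat + block + rng.normal(0, 0.5, 200)
ate, se = neyman_block_estimate(y, treat, block)
```

Under a correctly specified selection model the resulting interval `ate ± 1.96 * se` would have (at least) nominal asymptotic coverage, per the abstract's claims.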
Causal Inference in Block-Randomized Experiments: Analysis Based on Neyman’s Stochastic Causal Model
doi:10.11648/j.ajtas.20140306.13
American Journal of Theoretical and Applied Statistics
2014-11-06
© Science Publishing Group
Emil Scosyrev
Causal Inference in Block-Randomized Experiments: Analysis Based on Neyman’s Stochastic Causal Model
3
6
196
196
2014-11-06
2014-11-06
10.11648/j.ajtas.20140306.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20140306.13
© Science Publishing Group
Defining Quality and Maturity Level Applying the Grey System and the Method for Automotive Enterprises Diagnosis
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2014030601.13
This study aims to diagnose the maturity level of companies certified under the automotive quality management system ISO/TS 16949:2009. It analyzes whether these companies can be considered World Class Organizations (WCO), identifying their management strengths and weaknesses in order to provide input for improvement opportunities in their systems. The methodology applies the Industrial Benchmarking Questionnaire from the Euvaldo Lodi Institute of Santa Catarina (IEL/SC) together with the Method for Enterprises Diagnosis (MED). A quantitative analysis of the companies' degree of maturity was then performed using Grey Correlation Analysis. Given the study's limitations, its results are classified as exploratory. Future research may examine the correlation among a larger number of ISO/TS-certified companies; a broader, larger sample would provide a better picture of ISO/TS certification and each organization's maturity state. The value of this study lies in its ability to diagnose organizational maturity by combining the MED, Industrial Benchmarking, and Grey System tools, which made it possible to define the weaknesses and strengths of each organization analyzed. The study identified a systematic way to guide new projects and initiatives, to support and develop strategic planning, and to determine how organizations are establishing world-class standards.
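The grey relational (correlation) analysis used for the quantitative maturity ranking can be sketched roughly as follows; the scores, the larger-is-better normalization, and the distinguishing coefficient ζ = 0.5 are illustrative assumptions, not values from the study:

```python
import numpy as np

def grey_relational_grades(data, zeta=0.5):
    """Grey relational grade of each alternative (row) against the
    ideal reference series, after larger-is-better normalization."""
    data = np.asarray(data, dtype=float)
    # normalize each criterion (column) to [0, 1], larger is better
    norm = (data - data.min(0)) / (data.max(0) - data.min(0))
    delta = np.abs(1.0 - norm)               # distance to ideal reference (all ones)
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + zeta * dmax) / (delta + zeta * dmax)
    return coeff.mean(axis=1)                # grade = mean coefficient per row

# hypothetical benchmarking scores for three companies on three criteria
scores = grey_relational_grades([[80, 70, 90],
                                 [95, 85, 60],
                                 [99, 99, 99]])
```

A company whose scores dominate every criterion attains the maximum grade of 1, so grades give a maturity ranking of the alternatives.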
Defining Quality and Maturity Level Applying the Grey System and the Method for Automotive Enterprises Diagnosis
doi:10.11648/j.ajtas.s.2014030601.13
American Journal of Theoretical and Applied Statistics
2014-12-27
© Science Publishing Group
Robisom Damasceno Calado
Messias Borges Silva
Angela Alice Silva Boa Sorte Oliveira
Gabriela Salim Spagnol
Alice Sarantopoulos
Li Min Li
Defining Quality and Maturity Level Applying the Grey System and the Method for Automotive Enterprises Diagnosis
3
6
34
34
2014-12-27
2014-12-27
10.11648/j.ajtas.s.2014030601.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2014030601.13
© Science Publishing Group
Taguchi Orthogonal Array Combined with Monte Carlo Simulation in the Optimization of Wastewater Treatment
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2014030601.12
In this work, a Monte Carlo simulation was performed on a mathematical model obtained from a Taguchi experimental design. The simulation enabled an improvement in the planned values of the response variables of 54.26% for TOC, 53.28% for COD, and 6.58% for total phenols, which highlights the importance of the method for experimental optimization and the consequent reduction in the number of experiments to be carried out in the first stage of the work.
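A minimal sketch of Monte Carlo simulation over a model fitted from a Taguchi design, with wholly hypothetical coefficients (the paper's actual model and software are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical first-order model fitted from a Taguchi design:
# predicted TOC removal (%) as a function of two coded factors in [-1, 1]
def toc_removal(x1, x2):
    return 40.0 + 8.0 * x1 - 5.0 * x2 + 3.0 * x1 * x2

# Monte Carlo: sample the factor space and propagate through the model
n = 100_000
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
removal = toc_removal(x1, x2)

best = removal.argmax()
mc_best = removal[best]       # best predicted removal over the sampled settings
best_setting = (x1[best], x2[best])
```

Exploring the model in simulation rather than in the laboratory is what allows the number of physical experiments in the first stage to be reduced.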
Taguchi Orthogonal Array Combined with Monte Carlo Simulation in the Optimization of Wastewater Treatment
doi:10.11648/j.ajtas.s.2014030601.12
American Journal of Theoretical and Applied Statistics
2014-12-27
© Science Publishing Group
Ana Paula Barbosa Rodrigues de Freitas
Leandro Valim de Freitas
Carla Cristina Almeida Loures
Aneirson Francisco da Silva
Lúcio Gualiato Gonçalves
Messias Borges Silva
Taguchi Orthogonal Array Combined with Monte Carlo Simulation in the Optimization of Wastewater Treatment
3
6
22
22
2014-12-27
2014-12-27
10.11648/j.ajtas.s.2014030601.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2014030601.12
© Science Publishing Group
Public Perception of Science: Mapping the Concepts of Brazilian Undergraduate Students of the State of Sao Paulo through Structural Equation Modeling
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2014030601.11
Since modern society depends to a large extent on scientific development, the degree of association between scientific knowledge and attitudes toward science has historical, social, and political implications. In this sense, it becomes crucial to analyze public attitudes toward science, as these are related to the changing context of scientific practices and their implications for practical problems. Thus, we developed a survey instrument that allowed us to assess the causal relationships and correlations between conceptions, attitudes, and socio-demographic factors in relation to science, using genetic engineering as a mediating theme. The socio-demographic factors include gender, age, income, religion, schooling, consumption of information provided by the media, perceived knowledge, and personal experience. The sample was composed of students from various undergraduate courses at public and private institutions. The data were analyzed quantitatively by structural equation modeling. The results show that the conceptions people hold about science directly and positively influence their attitudes toward science; the social factors carry weight, but on a much smaller scale.
Public Perception of Science: Mapping the Concepts of Brazilian Undergraduate Students of the State of Sao Paulo through Structural Equation Modeling
doi:10.11648/j.ajtas.s.2014030601.11
American Journal of Theoretical and Applied Statistics
2014-12-27
© Science Publishing Group
Fernanda de Oliveira Simon
Estéfano Vizconde Veraszto
José Tarcísio Franco de Camargo
Dirceu da Silva
Leandro Valim de Freitas
Nonato Assis de Miranda
Public Perception of Science: Mapping the Concepts of Brazilian Undergraduate Students of the State of Sao Paulo through Structural Equation Modeling
3
6
18
18
2014-12-27
2014-12-27
10.11648/j.ajtas.s.2014030601.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2014030601.11
© Science Publishing Group
Response Surface Method and Taguchi Orthogonal Array Applied to Phenolic Wastewater by Advanced Oxidation Process (AOP)
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2014030601.14
The advanced oxidation process in this work was conducted using design of experiments. Initially, Taguchi's L16 orthogonal array (Photo-Fenton and ozone) was applied, with which 29.07% TOC removal was obtained. The process was then optimized with response surface methodology for Photo-Fenton, achieving the highest TOC removal of 54.68%. This condition is associated with a hydrogen peroxide to ferrous ion mass ratio of eight, corresponding to 47.8 g of H2O2 and 5.95 g of Fe2+.
Response Surface Method and Taguchi Orthogonal Array Applied to Phenolic Wastewater by Advanced Oxidation Process (AOP)
doi:10.11648/j.ajtas.s.2014030601.14
American Journal of Theoretical and Applied Statistics
2014-12-31
© Science Publishing Group
Ana Paula Barbosa Rodrigues de Freitas
Leandro Valim de Freitas
Carla Cristina Almeida Loures
Lúcio Gualiato Gonçalves
Messias Borges Silva
Response Surface Method and Taguchi Orthogonal Array Applied to Phenolic Wastewater by Advanced Oxidation Process (AOP)
3
6
41
41
2014-12-31
2014-12-31
10.11648/j.ajtas.s.2014030601.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2014030601.14
© Science Publishing Group
The Use of Advanced Oxidation Processes (AOPs) in Dairy Effluent Treatment
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2014030601.15
Several studies share the aim of developing new processes and technologies to minimize the volume and toxicity of effluents generated by chemical industries. Currently, technological advances in effluent and residue treatment are based on environmental sustainability (residue reuse) and on the degradation of pollutants into substances that degrade more easily in nature. Advanced Oxidation Processes (AOPs), based on the generation of hydroxyl radicals as the oxidizing agent, show great efficiency in the treatment of industrial effluents, promoting the reduction of effluent coloration and environmental decontamination. This project represents an effort to make the treatment of the dairy effluent produced by YAKULT, Lorena unit, State of Sao Paulo, viable using AOPs (Fenton reagent, UV). Different parameters were evaluated at bench scale in a batch process, such as the exposure time to UV radiation, pH range, temperature, and Fenton reagent concentration. The reduction of COD was the main control variable in the experimental results, with 94.17% degradation of the organic matter in the effluent.
The Use of Advanced Oxidation Processes (AOPs) in Dairy Effluent Treatment
doi:10.11648/j.ajtas.s.2014030601.15
American Journal of Theoretical and Applied Statistics
2015-01-10
© Science Publishing Group
Carla Cristina Almeida Loures
Gisella Rossana Lamas Samanamud
Ana Paula Barbosa Rodrigues de Freitas
Ivy S. Oliveira
Leandro Valim de Freitas
Carlos Roberto de Oliveira Almeida
The Use of Advanced Oxidation Processes (AOPs) in Dairy Effluent Treatment
3
6
46
46
2015-01-10
2015-01-10
10.11648/j.ajtas.s.2014030601.15
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2014030601.15
© Science Publishing Group
Didactic Tool Applied into Data Collection and Variability Study in a Process
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2014030601.16
The objective of this paper is to present an application of design of experiments in which students learn how to collect real data in a case study using a catapult and to generate the statistical analysis through software, in order to achieve high reliability in the development of the work. The variation factors are selected between the maximum and minimum levels accepted by the catapult. The results of the experiment are collected within the desired range and presented in tables, interaction plots, and a Pareto chart. The experiments showed that not all of the catapult variables initially considered affect the quality of the result. That is, for the ranges considered, only one factor has a significant effect on the quality of the experiment; it can therefore be stated that there is no need to set a specific value on the catapult, but rather a range of values within which the experiment will perform well.
Didactic Tool Applied into Data Collection and Variability Study in a Process
doi:10.11648/j.ajtas.s.2014030601.16
American Journal of Theoretical and Applied Statistics
2015-02-08
© Science Publishing Group
Thiago De Camargo Leite Labastie
Carlos Alberto Chaves
Antonio Faria Neto
Wendell De Queiroz Lamas
Luiz Fernando Fiorio
Helena Barros Fiorio
Didactic Tool Applied into Data Collection and Variability Study in a Process
3
6
57
57
2015-02-08
2015-02-08
10.11648/j.ajtas.s.2014030601.16
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2014030601.16
© Science Publishing Group
Order Statistics from Non-Identical Standard Type II Generalized Logistic Variables and Applications at Moments
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150401.11
In this paper, the moment generating function of order statistics arising from independent non-identically distributed (INID) standard type II generalized logistic (SGLII) variables is established. A recurrence relation for all moments of all order statistics arising from INID SGLII variables is derived. Special cases for the moments are deduced using the polygamma function. Some numerical examples are given.
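Moment results of this kind can be checked numerically by simulation. A rough sketch, assuming a standard type II generalized logistic parameterization with survival function S(x) = (1 + e^x)^(-b); if the paper uses a different parameterization, the inverse-transform step changes accordingly:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_sglii(b, size, rng):
    """Inverse-transform draw from an assumed standard type II generalized
    logistic parameterization with survival S(x) = (1 + e^x)^(-b)."""
    u = rng.random(size)
    return np.log((1.0 - u) ** (-1.0 / b) - 1.0)

# INID case: three variables with different shape parameters
shapes = [0.5, 1.0, 2.0]
reps = 200_000
draws = np.column_stack([sample_sglii(b, reps, rng) for b in shapes])
order = np.sort(draws, axis=1)        # order statistics of each replicate
moments = order.mean(axis=0)          # simulated first moments E[X_(r)]
```

The simulated moments should increase in the order-statistic index r, and they can be compared against exact values obtained from the recurrence relation.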
Order Statistics from Non-Identical Standard Type II Generalized Logistic Variables and Applications at Moments
doi:10.11648/j.ajtas.20150401.11
American Journal of Theoretical and Applied Statistics
2015-01-16
© Science Publishing Group
Z. AL-Saiary
Order Statistics from Non-Identical Standard Type II Generalized Logistic Variables and Applications at Moments
4
1
5
5
2015-01-16
2015-01-16
10.11648/j.ajtas.20150401.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150401.11
© Science Publishing Group
The Design of Experiment Application (DOE) in the Beneficiation of Cashew Chestnut in Northeastern Brazil
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150401.12
Brazil is one of the world's leaders in the production and processing of cashew chestnuts, and 100% of the cashew chestnut processing industries are located in the northeastern region of the country. To maintain and expand the cashew chestnut market, product quality must be guaranteed by controlling the production process. Here, the application of DOE (Design of Experiments) is suggested for the cashew chestnut beneficiation process, notably in the decortication stage, where the chestnuts are cut into bands by mechanical means. A fractional factorial experimental design was used, and the response variable evaluated was the quality of the almond in the final stage of production, measured by the percentage of whole almonds after separation from the barks. The chosen process factors were almond size, humidification of the environment, temperature of the environment before the decorticator, and decorticator speed. At the end of the experiment, DOE proved to be an applicable tool, indicating which factors were most influential as well as their levels of adjustment. The variables related to almond size and decortication speed were the influential production factors in this process. In addition, strong noise was identified in the process, observed through the large variance in the experimental data, especially that of the response variable.
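A fractional factorial design of the kind described, here a 2^(4-1) half fraction with the fourth factor confounded with the three-way interaction (generator I = ABCD), can be generated as follows; the mapping of coded columns to the four process factors is illustrative, and the paper's exact fraction is not specified in the abstract:

```python
from itertools import product

# 2^(4-1) fractional factorial: full factorial in A, B, C,
# with the fourth factor confounded as D = ABC (generator I = ABCD)
design = [(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)]

# eight runs instead of sixteen; the coded columns could map to almond
# size, humidification, temperature, and decorticator speed (-1 / +1)
for run in design:
    print(run)
```

Halving the run count is what makes such designs attractive when, as here, four factors must be screened under noisy production conditions.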
The Design of Experiment Application (DOE) in the Beneficiation of Cashew Chestnut in Northeastern Brazil
doi:10.11648/j.ajtas.20150401.12
American Journal of Theoretical and Applied Statistics
2015-01-19
© Science Publishing Group
Miriam Karla Rocha
Liane Márcia Freitas Silva
Alexandre José de Oliveira
André Lucena Duarte
Adrícia Fonseca Mendes
Messias Borges Silva
The Design of Experiment Application (DOE) in the Beneficiation of Cashew Chestnut in Northeastern Brazil
4
1
14
14
2015-01-19
2015-01-19
10.11648/j.ajtas.20150401.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150401.12
© Science Publishing Group
Multilevel Modeling of Determinants of Fertility Status of Married Women in Ethiopia
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150401.14
The main objective of this study is to investigate the determinant factors of the fertility status of married women in Ethiopia and to examine the reasons for variations in fertility across the regions of Ethiopia, based on data on 7052 married women obtained from the Ethiopian Demographic and Health Survey (EDHS, 2011). Multilevel binary logistic regression models of fertility status were employed. The study revealed that the random-intercept, fixed-slope model fits the data significantly better than the other multilevel logistic regression models. The results confirmed that the woman's education level, sex of the household head, having been visited by a family planning worker in the last twelve months, child loss experience, the woman's occupation, religion, and age at first birth were significant determinants and also contributing factors to the variation in fertility status among the regions of Ethiopia. In the random-intercept model, the overall variance of the constant term was statistically significant, implying that women with the same characteristics in two different regions have different fertility status: that is, there is a clear region effect. The multilevel model fit the data better than the single-level model.
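The "clear region effect" of a random-intercept binary logistic model can be illustrated by simulation; the coefficients, region count, and intercept variance below are invented, and no EDHS data are used:

```python
import numpy as np

rng = np.random.default_rng(7)

# random-intercept logistic model: region effects u_j ~ N(0, sigma_u^2)
# shift the log-odds of the outcome for every woman in region j
sigma_u = 0.8
n_regions, n_per_region = 11, 600
u = rng.normal(0.0, sigma_u, n_regions)              # region-level intercepts
x = rng.normal(0.0, 1.0, (n_regions, n_per_region))  # e.g. standardized education
logit = -0.5 + u[:, None] - 1.0 * x                  # education lowers the odds
p = 1.0 / (1.0 + np.exp(-logit))
y = rng.random((n_regions, n_per_region)) < p        # binary fertility outcome

# identical covariate model, yet outcome rates differ across regions
region_rates = y.mean(axis=1)
spread = region_rates.max() - region_rates.min()
```

A significant variance of the random intercept (sigma_u^2 above) is exactly what produces the spread in region-level rates for women with the same characteristics.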
Multilevel Modeling of Determinants of Fertility Status of Married Women in Ethiopia
doi:10.11648/j.ajtas.20150401.14
American Journal of Theoretical and Applied Statistics
2015-01-21
© Science Publishing Group
Anteneh Mulugeta Eyasu
Multilevel Modeling of Determinants of Fertility Status of Married Women in Ethiopia
4
1
25
25
2015-01-21
2015-01-21
10.11648/j.ajtas.20150401.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150401.14
© Science Publishing Group
Forecasting Inflation Rate in Kenya Using SARIMA Model
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150401.13
It is the desire of policy makers in a country to have access to reliable forecasts of the inflation rate. This is achievable if an appropriate model with high predictive accuracy is used. In this paper, a Seasonal Autoregressive Integrated Moving Average (SARIMA) model is developed to forecast Kenya's inflation rate using quarterly data for the period 1981 to 2013 obtained from the KNBS. SARIMA(0,1,0)(0,0,1)4 was identified as the best model, chosen as the model with the smallest Akaike Information Criterion. The parameters were then estimated by maximum likelihood. Diagnostic checks using the Jarque-Bera normality test indicated that the residuals were normally distributed, and ACF and PACF plots of the residuals and squared residuals showed that they followed a white noise process and were homoskedastic, respectively. The predictive ability measures RMSE = 0.2871, MAPE = 3.9456, and MAE = 0.2369 showed that the model was appropriate for forecasting the inflation rate in Kenya.
It is the desire of the policy makers in a country is to have access to reliable forecast of inflation rate. This is achievable if an appropriate model with high predictive accuracy is used. In this paper, Seasonal Autoregressive Integrated Moving Average (SARIMA) model is developed to forecast Kenya's inflation rate using quarterly data for the period 1981 to 2013 obtained from KNBS. SARIMA (0,1,0) (0,0,1)4 was identified as the best model. This was achieved by identifying the model with the least Akaike Information Criterion. The parameters were then estimated through the Maximum Likelihood Estimation method. Diagnostic checks using Jarque-Bera Normality Test indicated that they were normally distributed. ACF and PACF plots for the residuals and squared residuals revealed that they followed a white noise process and were homoskedastic respectively. The predictive ability tests RMSE=0.2871, MAPE=3.9456 and MAE= 0.2369 showed that the model was appropriate for forecasting the inflation rate in Kenya.
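The predictive-ability measures quoted (RMSE, MAPE, MAE) have standard definitions; a minimal sketch computing them, with hypothetical quarterly inflation figures rather than the paper's data:

```python
import math

def rmse(actual, forecast):
    # root mean squared error
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mae(actual, forecast):
    # mean absolute error
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    # mean absolute percentage error; assumes no zero actuals
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical quarterly inflation rates and one-step-ahead forecasts
actual   = [5.8, 6.1, 6.4, 6.0]
forecast = [5.6, 6.3, 6.2, 6.1]
print(round(rmse(actual, forecast), 4),
      round(mape(actual, forecast), 4),
      round(mae(actual, forecast), 4))
```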
Forecasting Inflation Rate in Kenya Using SARIMA Model
doi:10.11648/j.ajtas.20150401.13
American Journal of Theoretical and Applied Statistics
2015-01-20
© Science Publishing Group
Susan W. Gikungu
Anthony G. Waititu
John M. Kihoro
Forecasting Inflation Rate in Kenya Using SARIMA Model
4
1
18
18
2015-01-20
2015-01-20
10.11648/j.ajtas.20150401.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150401.13
© Science Publishing Group
Bayesian Estimation Based on Record Values from Exponentiated Weibull Distribution: A Markov Chain Monte Carlo Approach
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150401.15
In this paper, we consider the Bayes estimators of the unknown parameters of the exponentiated Weibull distribution (EWD) under the assumption of gamma priors on both shape parameters. Point estimation and confidence intervals based on maximum likelihood and bootstrap methods are proposed. The Bayes estimators cannot be obtained in explicit form, so we propose Markov chain Monte Carlo (MCMC) techniques to generate samples from the posterior distributions and in turn compute the Bayes estimators. The approximate Bayes estimators obtained under the assumption of non-informative priors are compared with the maximum likelihood estimators using Monte Carlo simulations. A numerical example is also presented for illustrative purposes.
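When a posterior has no closed form, as for the EWD above, MCMC draws can approximate the Bayes estimator under squared-error loss by the posterior mean. A toy random-walk Metropolis sketch for a much simpler model (an exponential rate with a flat prior, not the paper's EWD posterior) illustrates the mechanics:

```python
import math
import random

def log_post(lam, n, s):
    # unnormalised log-posterior for an exponential likelihood with a flat
    # prior on lam > 0: n*log(lam) - lam*s  (s = sum of observations)
    return n * math.log(lam) - lam * s if lam > 0 else float("-inf")

def metropolis(n, s, draws=20000, step=0.3, start=1.0, seed=1):
    random.seed(seed)
    lam, chain = start, []
    for _ in range(draws):
        prop = lam + random.gauss(0.0, step)          # symmetric random walk
        if math.log(random.random()) < log_post(prop, n, s) - log_post(lam, n, s):
            lam = prop                                # accept the proposal
        chain.append(lam)
    return chain

chain = metropolis(n=5, s=10.0)
burned = chain[5000:]                                 # discard burn-in
print(round(sum(burned) / len(burned), 2))            # close to (n + 1)/s = 0.6
```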
Bayesian Estimation Based on Record Values from Exponentiated Weibull Distribution: A Markov Chain Monte Carlo Approach
doi:10.11648/j.ajtas.20150401.15
American Journal of Theoretical and Applied Statistics
2015-01-23
© Science Publishing Group
Rashad Mohamed El-Sagheer
Bayesian Estimation Based on Record Values from Exponentiated Weibull Distribution: A Markov Chain Monte Carlo Approach
4
1
32
32
2015-01-23
2015-01-23
10.11648/j.ajtas.20150401.15
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150401.15
© Science Publishing Group
Prediction Intervals for Future Order Statistics from Two Independent Sequences
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150401.16
In this article, based on an observed X-sequence of independent and identically distributed (iid) continuous random variables, we discuss the problem of predicting future order statistics from a Y-sequence of iid continuous random variables from the same distribution. Specifically, distribution-free prediction intervals (PIs) for an order statistic observation are derived based on either progressive Type-II right-censored data or ordered data from the past X-sequence, as well as outer and inner PIs based on order statistics observations. These intervals are exact and do not depend on the sampling distribution. Finally, a real lifetime data set on the breakdown of an insulating fluid between electrodes is used to illustrate the proposed procedures.
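The key distribution-free fact behind such PIs is that, for a single future observation Y from the same continuous distribution as X₁, …, Xₙ, P(X₍ᵢ₎ < Y < X₍ⱼ₎) = (j − i)/(n + 1) regardless of the distribution. A quick Monte Carlo check of this coverage identity (using uniform data, since any continuous law gives the same answer):

```python
import random

def coverage(n, i, j, reps=20000, seed=2):
    # Monte Carlo estimate of P(X_(i) < Y < X_(j)) for a future Y
    # from the same continuous distribution as the X-sample.
    random.seed(seed)
    hits = 0
    for _ in range(reps):
        xs = sorted(random.random() for _ in range(n))
        y = random.random()
        if xs[i - 1] < y < xs[j - 1]:
            hits += 1
    return hits / reps

n, i, j = 9, 2, 8
print(round(coverage(n, i, j), 3), (j - i) / (n + 1))   # both near 0.6
```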
Prediction Intervals for Future Order Statistics from Two Independent Sequences
doi:10.11648/j.ajtas.20150401.16
American Journal of Theoretical and Applied Statistics
2015-02-02
© Science Publishing Group
M. M. Mohie El-Din
M. S. Kotb
W. S. Emam
Prediction Intervals for Future Order Statistics from Two Independent Sequences
4
1
40
40
2015-02-02
2015-02-02
10.11648/j.ajtas.20150401.16
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150401.16
© Science Publishing Group
Towards a Successful Startup Company: Best Successful Team Components
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040101.12
Entrepreneurship has become an important sector in the Arab world. Many young entrepreneurs have ambitious projects and creative ideas, and they hope to get funding and incubation to implement these ideas. There are three incubators in Gaza which provide the required incubation, training and funding. Entrepreneurs' personality traits have a big effect on the success of their startup companies; moreover, the category of a startup company plays a big role in its success, especially in small markets such as Gaza. So we have to find a way to discover which ideas are most likely to succeed and under which category they can be classified, while paying close attention to the traits of the team members behind each idea; a team should have certain traits that qualify it as likely to be successful. In the present paper, we use a computing approach based on data mining techniques to study one of the business fields and produce a business technique that helps extract association rules for the incubated startup companies in Gaza. Moreover, we study these association rules to understand, and to help the incubators in Gaza avoid, failed ideas and teams as far as possible. Therefore, the incubators will be able to improve the incubation and entrepreneurship sector, increase the number of successful startup companies in Gaza, and reduce the funds and time wasted on failed startups.
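Association-rule extraction of the kind described rests on two quantities: the support of an itemset and the confidence of a rule. A minimal sketch on hypothetical team-trait records; the trait labels are illustrative only, not from the paper's data:

```python
# Hypothetical incubated-startup records: each "transaction" lists team
# traits together with the outcome; the labels are illustrative only.
transactions = [
    {"tech_background", "prior_job", "success"},
    {"tech_background", "success"},
    {"tech_background", "prior_job", "success"},
    {"prior_job", "fail"},
    {"tech_background", "fail"},
]

def support(itemset):
    # fraction of transactions containing every item in the set
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    # confidence of the rule lhs -> rhs
    return support(lhs | rhs) / support(lhs)

lhs, rhs = {"tech_background", "prior_job"}, {"success"}
print(support(lhs | rhs), confidence(lhs, rhs))
```

Rules whose support and confidence exceed chosen thresholds are the ones an incubator would act on.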
Towards a Successful Startup Company: Best Successful Team Components
doi:10.11648/j.ajtas.s.2015040101.12
American Journal of Theoretical and Applied Statistics
2014-12-27
© Science Publishing Group
Teejan T. El-Khazendar
Rifa J. El-Khozondar
Towards a Successful Startup Company: Best Successful Team Components
4
1
14
14
2014-12-27
2014-12-27
10.11648/j.ajtas.s.2015040101.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040101.12
© Science Publishing Group
Classical Statistical Entropy of Black Hole
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040101.13
The present article derives an expression for the classical statistical entropy of a black hole using Maxwell-Boltzmann statistics and shows that the classical statistical entropy is directly proportional to the area of the event horizon of the black hole, leading to the result S_bh ∝ A(r). No primary or secondary data are used in this paper. We have designed the work along the lines of the work of Ren Zhao and Shuang-Qi Hu, who obtained the quantum statistical entropy corresponding to the black hole horizon using Fermi-Dirac and Bose-Einstein statistics. They also showed that the entropy corresponding to the black hole horizon surface is the entropy of the quantum state near the surface of the horizon. This is entirely theoretical work, carried out on a laptop at the Marwari College research laboratory and the residential research chamber of the first author.
Classical Statistical Entropy of Black Hole
doi:10.11648/j.ajtas.s.2015040101.13
American Journal of Theoretical and Applied Statistics
2015-02-05
© Science Publishing Group
Dipo Mahto
Ved Prakash
Krishna Murari Singh
Brajnandan Kumar
Classical Statistical Entropy of Black Hole
4
1
18
18
2015-02-05
2015-02-05
10.11648/j.ajtas.s.2015040101.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040101.13
© Science Publishing Group
Modelling and Forecasting the Balance of Trade in Ethiopia
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040101.14
For a long period of time, Ethiopia has been involved in foreign trade and has experienced trade deficits several times in the past. This deficit can be largely explained by the unequal terms of trade between agricultural commodities (the country's major export) and capital goods (the country's major import). The core objective of the study was to model the balance of trade in Ethiopia and forecast its value through an ARIMA model using annual data from 1974/75 to 2009/10. The appropriate model was ARIMA (3, 1, 0), and the forecasted value of the balance of trade is expected to rise over time from 2010/11 up to 2015/16.
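Order selection of this kind typically picks the candidate ARIMA specification with the smallest Akaike Information Criterion, AIC = 2k − 2 ln L. A minimal sketch; the maximized log-likelihood values are hypothetical, chosen only to illustrate the comparison:

```python
def aic(log_lik, k):
    # Akaike Information Criterion: smaller is better;
    # k is the number of estimated parameters
    return 2 * k - 2 * log_lik

# Hypothetical maximised log-likelihoods for candidate ARIMA orders
candidates = {
    "ARIMA(1,1,0)": (-102.4, 2),
    "ARIMA(2,1,1)": (-100.9, 4),
    "ARIMA(3,1,0)": (-99.1, 4),
}
best = min(candidates, key=lambda m: aic(*candidates[m]))
print(best)
```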
Modelling and Forecasting the Balance of Trade in Ethiopia
doi:10.11648/j.ajtas.s.2015040101.14
American Journal of Theoretical and Applied Statistics
2015-03-20
© Science Publishing Group
Yibeltal Arega Ashebir
Tewodros Getinet Yirtaw
Anteneh Asmare Godana
Modelling and Forecasting the Balance of Trade in Ethiopia
4
1
23
23
2015-03-20
2015-03-20
10.11648/j.ajtas.s.2015040101.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040101.14
© Science Publishing Group
Examination of the Factor Structure of Critical Thinking Disposition Scale According to Different Variables
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040101.11
The information and technology age has brought rapid changes and transformations in the education system, and focused on raising qualified individuals who can choose, organize and use information; think on a critical and creative basis in the process; conduct research; solve problems; calculate possibilities and make deductions. Critical thinking, one of the most important characteristics one should possess, is a complicated and comprehensive process in which high-level skills are used. The development of critical thinking skills is a primary target of elementary curricula, which were reconstructed within the scope of the 2004 education reform. Such programs have introduced fundamental changes in educational activities upon the adoption of the constructivist education approach rather than the traditional approach. The present study aims to examine the factor structure of the Critical Thinking Disposition Scale (EMI) according to the genders and socio-economic statuses (SES) of different groups, and to determine whether the structure applies to different groups. A survey method is used in the research. The study population comprises a total of 39049 first-grade high school students from Ankara. The districts are categorized according to the low, medium and upper SESs of their populations, and the research sample comprises 1264 first-grade high school students chosen via random and stratified sampling methods. Data were analyzed and reported in accordance with the quantitative analysis technique. In order to examine the three-factor structure of the scale according to the genders and socio-economic statuses of the groups, Multi Group Confirmatory Factor Analysis (MGCFA) was conducted. Model A was taken as the basis, and three alternative models were constructed. Model D, in which error variances were released, was found to have better fit values. MGCFA was then conducted in the groups categorized according to low, medium and upper SES.
Taking Model A as the basic model, the paired comparisons of the models showed that Model E, in which the factor correlations and error variances were released, had better fit indices than the others. As a result of the research, the confirmatory model was found not to differ in terms of factor loads and inter-factor correlations in the groups categorized according to gender, but to differ in terms of error variances. In the groups categorized according to SES, the confirmatory model does not differ in terms of factor loads, but differs in terms of error variances and factor correlations.
Examination of the Factor Structure of Critical Thinking Disposition Scale According to Different Variables
doi:10.11648/j.ajtas.s.2015040101.11
American Journal of Theoretical and Applied Statistics
2014-08-24
© Science Publishing Group
Ebru Demircioğlu
Sevilay Kilmen
Examination of the Factor Structure of Critical Thinking Disposition Scale According to Different Variables
4
1
8
8
2014-08-24
2014-08-24
10.11648/j.ajtas.s.2015040101.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040101.11
© Science Publishing Group
The Statistical Distribution and Determinants of Mother’s Age at First Birth
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150402.11
The age at which childbearing begins influences the number of children a woman bears throughout her reproductive period in the absence of any active fertility control. This study employed both parametric and non-parametric survival analysis techniques, with a cohort of women within the reproductive age (15-49 years), to determine the statistical distribution of the age at first birth of a woman from her time of birth and to identify the significant prognostic factors determining the timing of first birth of Ghanaian women. Using data from the Ghana Demographic and Health Survey (GDHS), the study fitted several parametric Accelerated Failure Time models, from which the best parametric distribution for age at first birth was selected. The results revealed that the average age at first birth was about 20 years, with more than 87.4% of the women having given birth before they attained 25 years of age. The age at first birth among the Ghanaian women was best modeled by the log-logistic model. By this model, the age at which a woman had her first birth was determined, at the 10% significance level, by her age at first marriage, her educational level, her wealth status and whether or not she practiced family planning before her first birth.
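Under a log-logistic model like the one found best here, the probability of a first birth by age t is F(t) = 1/(1 + (t/α)^(−β)), where α is the median age. A sketch with α = 20, echoing the reported average of about 20 years, and a purely illustrative shape value β = 8 (not an estimate from the paper):

```python
def loglogistic_cdf(t, alpha, beta):
    # P(first birth by age t) under a log-logistic model;
    # alpha is the median age, beta controls the shape
    return 1.0 / (1.0 + (t / alpha) ** (-beta))

# alpha = 20 echoes the reported average age of about 20 years;
# beta = 8 is a purely illustrative shape value
alpha, beta = 20.0, 8.0
print(round(loglogistic_cdf(25.0, alpha, beta), 3))
```

With these illustrative values, roughly 86% of first births occur by age 25, in the same ballpark as the 87.4% reported.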
The Statistical Distribution and Determinants of Mother’s Age at First Birth
doi:10.11648/j.ajtas.20150402.11
American Journal of Theoretical and Applied Statistics
2015-02-16
© Science Publishing Group
Logubayom Anuwoje Ida
Luguterah Albert
The Statistical Distribution and Determinants of Mother’s Age at First Birth
4
2
52
52
2015-02-16
2015-02-16
10.11648/j.ajtas.20150402.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150402.11
© Science Publishing Group
Optimal Replacement Age and Maintenance Cost: A Case Study
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150402.12
Maintenance plays a vital role throughout an equipment or system's planned life cycle, and maintenance costs contribute a major portion of its life cycle costs. This paper analyzes a set of failure data for a particular type of battery used in automobiles and/or trailers and proposes the optimal maintenance age at which a non-failed battery should be maintained. A 2-fold Weibull mixture distribution is selected as the suitable distribution for the lifetime of the battery. The Expectation-Maximization (EM) algorithm is applied to estimate the parameters of the mixture distribution. The research will be of interest for effective maintenance management to maintenance engineers and managers working in the battery manufacturing industry, as well as to customers and service providers.
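The optimal maintenance age in such models typically minimizes the long-run expected cost per unit time of an age-replacement policy, C(T) = [c_p R(T) + c_f F(T)] / ∫₀ᵀ R(t) dt, where R is the survival function, c_p the preventive cost and c_f the failure cost. A sketch using a single Weibull lifetime for brevity (the paper fits a 2-fold Weibull mixture; every parameter and cost value below is hypothetical):

```python
import math

def weibull_sf(t, shape, scale):
    # Weibull survival function R(t)
    return math.exp(-((t / scale) ** shape))

def cost_rate(T, shape, scale, cp, cf, steps=2000):
    # Expected cost per unit time for age replacement at T:
    # replace preventively at T (cost cp) or on failure before T (cost cf).
    h = T / steps
    # midpoint rule for the mean cycle length, integral of R(t) over [0, T]
    mean_cycle = sum(weibull_sf((k + 0.5) * h, shape, scale) for k in range(steps)) * h
    sf = weibull_sf(T, shape, scale)
    return (cp * sf + cf * (1.0 - sf)) / mean_cycle

# Hypothetical values: increasing hazard (shape > 1) and cf > cp, so an
# interior optimum exists
shape, scale, cp, cf = 2.5, 30.0, 1.0, 5.0
Ts = [t / 2 for t in range(2, 121)]            # grid 1.0 .. 60.0 months
best_T = min(Ts, key=lambda T: cost_rate(T, shape, scale, cp, cf))
print(best_T)
```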
Optimal Replacement Age and Maintenance Cost: A Case Study
doi:10.11648/j.ajtas.20150402.12
American Journal of Theoretical and Applied Statistics
2015-03-14
© Science Publishing Group
Nayeema Sultana
Md. Rezaul Karim
Optimal Replacement Age and Maintenance Cost: A Case Study
4
2
57
57
2015-03-14
2015-03-14
10.11648/j.ajtas.20150402.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150402.12
© Science Publishing Group
Investigating Turkey Champion Clubs’ Financial Performance Using Bootstrap and Jackknife Methods
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150402.13
This study uses bootstrap and jackknife methods to investigate how efficiently Turkey's teams that have been champions since the super league was established converted their financial expenditure into points in the seasons in which they were consecutively in the league, from 2006-2007 to 2013-2014. In this context, the champion clubs' financial performance values for the said eight seasons were obtained by dividing each club's total expenditure in a season by the total points scored in that season. The results obtained by the bootstrap and jackknife methods were then compared on the basis of mean squared error, and the bootstrap method was judged to give better results than the jackknife method. All bootstrap and jackknife statistics were calculated with the aid of the R software. Turkey's clubs which have won the championship are Besiktas, Bursaspor, Fenerbahçe, Galatasaray and Trabzonspor.
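Bootstrap and jackknife standard errors of a per-season performance statistic can be sketched as follows; the cost-per-point figures are hypothetical, not the clubs' actual data:

```python
import math
import random

def bootstrap_se(data, stat, b=2000, seed=3):
    # standard error of stat from b resamples drawn with replacement
    random.seed(seed)
    n = len(data)
    reps = [stat([random.choice(data) for _ in range(n)]) for _ in range(b)]
    m = sum(reps) / b
    return math.sqrt(sum((r - m) ** 2 for r in reps) / (b - 1))

def jackknife_se(data, stat):
    # standard error of stat from the n leave-one-out replicates
    n = len(data)
    leaves = [stat(data[:i] + data[i + 1:]) for i in range(n)]
    m = sum(leaves) / n
    return math.sqrt((n - 1) / n * sum((l - m) ** 2 for l in leaves))

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical cost-per-point values over eight seasons
perf = [2.1, 1.8, 2.4, 2.0, 1.7, 2.6, 2.2, 1.9]
print(round(bootstrap_se(perf, mean), 3), round(jackknife_se(perf, mean), 3))
```

For the sample mean the jackknife SE equals the usual s/√n exactly, while the bootstrap SE carries some Monte Carlo noise; comparing such estimates is the kind of exercise the paper performs in R.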
Investigating Turkey Champion Clubs’ Financial Performance Using Bootstrap and Jackknife Methods
doi:10.11648/j.ajtas.20150402.13
American Journal of Theoretical and Applied Statistics
2015-03-19
© Science Publishing Group
Tolga Zaman
Kamil Alakus
Hasan Bulut
Investigating Turkey Champion Clubs’ Financial Performance Using Bootstrap and Jackknife Methods
4
2
63
63
2015-03-19
2015-03-19
10.11648/j.ajtas.20150402.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150402.13
© Science Publishing Group
Analysis of Fertility Pattern Through Mathematical Curves
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150402.14
Age-specific fertility curves normalized by total fertility can be considered as the density of the distribution of age at childbearing. Generally, the shape of the age-specific fertility rate changes from convex to concave after it reaches its maximum value. The proportion bearing children before age 35 may be interpreted as the tempo of fertility, and the rest may be interpreted as excess fertility, which is risky for both mother and child. Thus, the purpose of this study is to observe the pattern of fertility over time and space, keeping the above idea in consideration. To capture the modest change in fertility, the estimated total fertility rates are computed for the data through a simple mathematical model. For this purpose, secondary data on the age-specific fertility rate and its forward and backward cumulative distributions have been considered. The validity of the proposed models has also been checked through appropriate techniques.
Analysis of Fertility Pattern Through Mathematical Curves
doi:10.11648/j.ajtas.20150402.14
American Journal of Theoretical and Applied Statistics
2015-03-21
© Science Publishing Group
Brijesh P. Singh
Kushagra Gupta
K. K. Singh
Analysis of Fertility Pattern Through Mathematical Curves
4
2
70
70
2015-03-21
2015-03-21
10.11648/j.ajtas.20150402.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150402.14
© Science Publishing Group
Performance Analysis of Powers of Skewness and Kurtosis Based Multivariate Normality Tests and Use of Extended Monte Carlo Simulation for Proposed Novelty Algorithm
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040201.12
An ample study of the comparative powers of a number of omnibus multivariate normality tests is the main object of this paper. Testing for multivariate normality is a considerably more challenging process than testing for univariate normality, and the study of multivariate normality tests is therefore in increasing demand. In this paper, we explore several techniques for assessing multivariate normality (MVN) and demonstrate a comparative analysis of their competence. The results of an extensive Monte Carlo simulation study of the size-corrected power of various tests of multivariate normality, for samples drawn from contaminated normal distributions, are explored as well. Moreover, a novel algorithm is proposed in order to evaluate the size-corrected powers for testing multivariate normality. The algorithm proposed herein is a fast, easily implementable algorithm, and it can be applied to both univariate and multivariate normality tests. Using different omnibus tests for sample sizes 50 and 200, graphs of the empirical powers for multivariate normal data with lower and upper contamination are presented. Finally, some significant conclusions of the present study are drawn.
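Skewness- and kurtosis-based MVN tests of this kind usually start from Mardia's statistics b₁,ₚ and b₂,ₚ. A minimal two-dimensional sketch with an illustrative data set (not from the paper), using an explicit 2×2 inverse of the biased sample covariance:

```python
def mardia(data):
    # Mardia's multivariate skewness b1 and kurtosis b2 for 2-D data
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    c = [(x - mx, y - my) for x, y in data]
    # biased (divide-by-n) sample covariance entries
    sxx = sum(a * a for a, _ in c) / n
    syy = sum(b * b for _, b in c) / n
    sxy = sum(a * b for a, b in c) / n
    det = sxx * syy - sxy * sxy
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det   # explicit 2x2 inverse
    def g(u, v):                                        # u' S^{-1} v
        return u[0] * (ixx * v[0] + ixy * v[1]) + u[1] * (ixy * v[0] + iyy * v[1])
    b1 = sum(g(u, v) ** 3 for u in c for v in c) / n ** 2   # skewness
    b2 = sum(g(u, u) ** 2 for u in c) / n                   # kurtosis
    return b1, b2

# Illustrative bivariate sample
data = [(0.0, 0.1), (1.0, -0.4), (2.1, 1.0), (-1.0, 0.2),
        (0.5, 1.4), (1.5, 0.7), (-0.5, -1.3), (0.2, 0.6)]
b1, b2 = mardia(data)
print(round(b1, 3), round(b2, 3))
```

Large values of b₁ (relative to its asymptotic chi-square reference) or values of b₂ far from p(p + 2) signal departure from multivariate normality.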
Performance Analysis of Powers of Skewness and Kurtosis Based Multivariate Normality Tests and Use of Extended Monte Carlo Simulation for Proposed Novelty Algorithm
doi:10.11648/j.ajtas.s.2015040201.12
American Journal of Theoretical and Applied Statistics
2015-03-11
© Science Publishing Group
Vishwa Nath Maurya
Ram Bilas Misra
Chandra K. Jaggi
Avadhesh Kumar Maurya
Performance Analysis of Powers of Skewness and Kurtosis Based Multivariate Normality Tests and Use of Extended Monte Carlo Simulation for Proposed Novelty Algorithm
4
2
18
18
2015-03-11
2015-03-11
10.11648/j.ajtas.s.2015040201.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040201.12
© Science Publishing Group
Mathematical Modelling and Steady State Performance Analysis of a Markovian Queue with Heterogeneous Servers and Working Vacation
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040201.11
In the present paper, a mathematical model of a Markovian queueing system with two heterogeneous servers and working vacation is developed. Motivated by queueing situations arising in real-life problems, we consider a service policy in which both heterogeneous servers take a vacation when no customers are waiting for service in the queue; thereafter, server 1 is always available, while server 2 goes on vacation whenever it is idle. The vacationing server returns to serve at a lower rate when an arrival finds the other server busy. A busy-period analysis of the working-vacation model with heterogeneous servers is carried out. Steady-state performance measures of the Markovian queueing system are explored for varying parameters using the matrix-geometric method. Finally, conclusive observations based on a sensitivity analysis of the performance measures are presented.
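The matrix-geometric method mentioned in the abstract rests on computing the rate matrix R of a quasi-birth-death (QBD) process, after which the stationary vector decays geometrically as pi_k = pi_0 R^k. A generic sketch of the standard successive-substitution iteration follows; the 1x1 blocks below encode a plain M/M/1 queue as a degenerate QBD for checking purposes, not the paper's two-server working-vacation model.

```python
import numpy as np

def qbd_rate_matrix(a0, a1, a2, tol=1e-12, max_iter=10000):
    """Minimal nonnegative solution R of A0 + R A1 + R^2 A2 = 0,
    where A0/A1/A2 are the up/local/down blocks of a QBD generator,
    found by the classical iteration R <- -(A0 + R^2 A2) A1^{-1}."""
    a1_inv = np.linalg.inv(a1)
    r = np.zeros_like(a0)
    for _ in range(max_iter):
        r_new = -(a0 + r @ r @ a2) @ a1_inv
        if np.abs(r_new - r).max() < tol:
            return r_new
        r = r_new
    raise RuntimeError("R iteration did not converge")

# Check on M/M/1 with arrival rate 1 and service rate 2: R must equal rho = 0.5.
lam, mu = 1.0, 2.0
r = qbd_rate_matrix(np.array([[lam]]),
                    np.array([[-(lam + mu)]]),
                    np.array([[mu]]))
```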
Mathematical Modelling and Steady State Performance Analysis of a Markovian Queue with Heterogeneous Servers and Working Vacation
doi:10.11648/j.ajtas.s.2015040201.11
American Journal of Theoretical and Applied Statistics
2015-03-11
© Science Publishing Group
Vishwa Nath Maurya
Mathematical Modelling and Steady State Performance Analysis of a Markovian Queue with Heterogeneous Servers and Working Vacation
4
2
10
10
2015-03-11
2015-03-11
10.11648/j.ajtas.s.2015040201.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040201.11
© Science Publishing Group
Empirical Analysis of Work Life Balance Policies and Their Impact on Employee’s Job Satisfaction and Performance: Descriptive Statistical Approach
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040201.15
The present paper analyzes the relationship between work-life balance policies and employee job satisfaction. Specifically, it examines how such policies help employees attain equilibrium between professional work and other activities and reduce friction between official and domestic life. Related theories of employee job satisfaction proposed by previous researchers are reviewed, summarized, and correlated with the proposed empirical analysis. Quality-of-work-life policies are increasingly becoming part of business strategy, and the focus is on their potential to influence employees’ quality of working life and, more importantly, to help them maintain work-life balance with equal attention to performance, commitment at work, and job satisfaction. The study offers researchers, policy makers, management professionals, statisticians, and students a clear account of employee job satisfaction, work-life balance, and their relationship. The study adopts a descriptive statistical approach. The target population was two hundred and forty respondents. Primary data were collected using questionnaires and analyzed using a statistical package for management and social sciences. The findings emphasize that each work-life balance policy is, on its own, a predictor of job satisfaction. The goodness of fit, R = 0.618, indicates a strong relationship between the independent variables and the dependent variable. On the basis of these results, it is recommended that managers in banks improve the work-life balance policies offered to employees in order to increase job satisfaction and improve staff commitment and productivity.
Empirical Analysis of Work Life Balance Policies and Their Impact on Employee’s Job Satisfaction and Performance: Descriptive Statistical Approach
doi:10.11648/j.ajtas.s.2015040201.15
American Journal of Theoretical and Applied Statistics
2015-03-11
© Science Publishing Group
Vishwa Nath Maurya
Chandra K. Jaggi
Bijay Singh
Charanjeet Singh Arneja
Avadhesh Kumar Maurya
Diwinder Kaur Arora
Empirical Analysis of Work Life Balance Policies and Their Impact on Employee’s Job Satisfaction and Performance: Descriptive Statistical Approach
4
2
43
43
2015-03-11
2015-03-11
10.11648/j.ajtas.s.2015040201.15
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040201.15
© Science Publishing Group
Design and Estimate of the Optimal Parameters of Adaptive Control Chart Model Using Markov Chains Technique
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040201.13
The present paper aims to design and estimate the optimal parameters of an adaptive control chart model for monitoring the process mean using variable sample sizes and sampling intervals. The X-bar VSSI chart (variable sample size and sampling interval) is chosen for two special features: it is an adaptive scheme with great potential for practical application, and, once its optimal parameters are established, it requires only the sample size and the time between sample selections. The Markov chain technique is applied to estimate the optimal parameters of the chosen control chart model. Two functions written in the R language are presented to assist the user in planning a statistical design based on the X-bar VSSI adaptive scheme. The effectiveness of the control chart is evaluated by means of Markov chains, and the optimal parameters of the adaptive control chart model are explored. In addition, a numerical example illustrating the application of the control chart model is given, and conclusive observations with suggestions for future work are offered.
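The Markov-chain evaluation of a control chart reduces to a fundamental-matrix computation: with Q the transition matrix among the in-control (transient) states, the vector of average run lengths is (I - Q)^{-1} 1. A minimal sketch follows; the one-state example merely recovers the textbook Shewhart ARL and is not the paper's adaptive VSSI design.

```python
import numpy as np
from scipy import stats

def arl_from_markov_chain(q):
    """Average run length of a control chart modelled as an absorbing
    Markov chain: ARL = first entry of (I - Q)^{-1} 1, where Q contains
    the transition probabilities among the in-control states."""
    q = np.atleast_2d(np.asarray(q, dtype=float))
    n = q.shape[0]
    return np.linalg.solve(np.eye(n) - q, np.ones(n))[0]

# Sanity check: a Shewhart chart is a one-state chain whose per-sample
# signal probability is p, so ARL = 1/p (about 370 for 3-sigma limits).
p_signal = 2 * stats.norm.sf(3.0)
arl = arl_from_markov_chain([[1 - p_signal]])
```

An adaptive chart discretizes the in-control region into several states (e.g. small-sample/long-interval vs. large-sample/short-interval zones) and fills Q accordingly; the same formula then yields the ARL.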
Design and Estimate of the Optimal Parameters of Adaptive Control Chart Model Using Markov Chains Technique
doi:10.11648/j.ajtas.s.2015040201.13
American Journal of Theoretical and Applied Statistics
2015-03-11
© Science Publishing Group
Vishwa Nath Maurya
Ram Bilas Misra
Chandra K. Jaggi
Charanjeet Singh Arneja
Rama Shanker Sharma
Avadhesh Kumar Maurya
Design and Estimate of the Optimal Parameters of Adaptive Control Chart Model Using Markov Chains Technique
4
2
26
26
2015-03-11
2015-03-11
10.11648/j.ajtas.s.2015040201.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040201.13
© Science Publishing Group
Correlation Analysis between the Corporate Governance and Financial Performance of Banking Sectors Using Parameter Estimation
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040201.14
The present paper addresses the problem of determining the relationship between corporate governance variables and the financial performance of Islamic banks. Corporate governance is of particular importance in the banking sector because of its special nature: the bankruptcy of a bank affects not only the relevant parties, customers, depositors, and lenders, but also financial stability and, in turn, the economy as a whole. The paper considers the specificity of governance in Islamic banks, which face dual governance: the Anglo-Saxon governance system and the Islamic governance system. In addition, we measure the impact of corporate governance variables on financial performance through an empirical study of a sample of Islamic banks in the GCC region over the period 2005-2012. The study finds a very strong relationship between governance variables and the financial performance of Islamic banks: return on assets is positively related to the composition of the Board of Directors, the size of the Board, the number of board committees, and the number of members of the Sharia Supervisory Board, while it is negatively related to ownership concentration.
Correlation Analysis between the Corporate Governance and Financial Performance of Banking Sectors Using Parameter Estimation
doi:10.11648/j.ajtas.s.2015040201.14
American Journal of Theoretical and Applied Statistics
2015-03-11
© Science Publishing Group
Vishwa Nath Maurya
Rama Shanker Sharma
Saad Talib Hasson Aljebori
Avadhesh Kumar Maurya
Diwinder Kaur Arora
Correlation Analysis between the Corporate Governance and Financial Performance of Banking Sectors Using Parameter Estimation
4
2
32
32
2015-03-11
2015-03-11
10.11648/j.ajtas.s.2015040201.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040201.14
© Science Publishing Group
Application of Discriminant Analysis on Broncho-Pulmonary Dysplasia among Infants: A Case Study of UMTH and UDUS Hospitals in Maiduguri, Nigeria
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040201.16
The present research paper investigates the incidence and prevalence of broncho-pulmonary dysplasia (BPD) among infants at the UMTH and UDUS hospitals in Maiduguri and Sokoto, Nigeria. The data were obtained from the University of Maiduguri Teaching Hospital (UMTH), Maiduguri, and Usmanu Danfodiyo University Teaching Hospital, Sokoto, with a sample size of seventy (70) patients in 2014: fifty (50) patients from Maiduguri and twenty (20) from Sokoto. A discriminant analysis model was employed for the analysis with the help of SPSS. The results indicate that the discriminant model classifies new cases in Maiduguri perfectly, while it misclassifies one of five new cases in Sokoto. This suggests that the prediction of broncho-pulmonary dysplasia is better done with the discriminant model in Maiduguri. The study recommends that doctors and clinics adopt the models built in this research to detect the prevalence of BPD among infants.
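The kind of discriminant model the study fits can be sketched as a pooled-covariance linear discriminant. The synthetic two-cluster data below are purely illustrative, since the hospital data are not part of this feed.

```python
import numpy as np

def lda_fit(x, y):
    """Gaussian linear discriminant with a pooled covariance matrix.
    Returns a predict(z) function assigning each row of z to the class
    with the largest linear discriminant score."""
    classes = np.unique(y)
    means = np.array([x[y == c].mean(axis=0) for c in classes])
    pooled = sum((x[y == c] - means[i]).T @ (x[y == c] - means[i])
                 for i, c in enumerate(classes)) / (len(x) - len(classes))
    s_inv = np.linalg.inv(pooled)
    priors = np.array([np.mean(y == c) for c in classes])

    def predict(z):
        scores = (z @ s_inv @ means.T
                  - 0.5 * np.einsum('ij,jk,ik->i', means, s_inv, means)
                  + np.log(priors))
        return classes[np.argmax(scores, axis=1)]

    return predict

# Illustrative, well-separated synthetic clusters.
rng = np.random.default_rng(1)
a = rng.normal([0.0, 0.0], 0.3, size=(30, 2))
b = rng.normal([4.0, 4.0], 0.3, size=(30, 2))
x = np.vstack([a, b])
y = np.array([0] * 30 + [1] * 30)
acc = (lda_fit(x, y)(x) == y).mean()
```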
Application of Discriminant Analysis on Broncho-Pulmonary Dysplasia among Infants: A Case Study of UMTH and UDUS Hospitals in Maiduguri, Nigeria
doi:10.11648/j.ajtas.s.2015040201.16
American Journal of Theoretical and Applied Statistics
2015-03-21
© Science Publishing Group
Vishwa Nath Maurya
Madaki Umar Yusuf
Vijay Vir Singh
Babagana Modu
Application of Discriminant Analysis on Broncho-Pulmonary Dysplasia among Infants: A Case Study of UMTH and UDUS Hospitals in Maiduguri, Nigeria
4
2
51
51
2015-03-21
2015-03-21
10.11648/j.ajtas.s.2015040201.16
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.s.2015040201.16
© Science Publishing Group
Cross-Country Spillovers in East Africa: A Global Vector Autoregressive Analysis
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.18
The recent financial crisis raises important issues about the transmission of financial shocks across borders. This paper uses the global vector autoregressive (GVAR) model, as developed in Dees, di Mauro, Pesaran and Smith (2007), to study cross-country interlinkages among East African countries, with trade weights used to capture the importance of the foreign variables. The results reveal no evidence of strong international linkages across countries in East Africa. They also reveal that the variable in which a shock is simulated is the main channel through which shocks are transmitted in the short run, while the contribution of other variables becomes more important over longer horizons.
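The trade-weighted foreign ("starred") variables at the heart of a GVAR can be sketched in a few lines: each country's foreign variable is a weighted average of the other countries' variables, with zero own-weight and rows summing to one. The weight matrix below is hypothetical, not the paper's East African trade weights.

```python
import numpy as np

# Hypothetical trade-weight matrix for 3 countries: w[i, j] is the weight
# country i puts on country j; w[i, i] = 0 and each row sums to 1.
W = np.array([[0.0, 0.7, 0.3],
              [0.6, 0.0, 0.4],
              [0.5, 0.5, 0.0]])

# One period's observations of a variable (e.g. output) for the 3 countries.
X = np.array([[1.0, 2.0, 3.0]])

# Foreign variables: x*_i = sum_j w[i, j] * x_j.
X_star = X @ W.T
```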
Cross-Country Spillovers in East Africa: A Global Vector Autoregressive Analysis
doi:10.11648/j.ajtas.20150403.18
American Journal of Theoretical and Applied Statistics
2015-04-22
© Science Publishing Group
Daniel Njoora
Olusanya E. Olubusoye
Patrick Weke
Cross-Country Spillovers in East Africa: A Global Vector Autoregressive Analysis
4
3
137
137
2015-04-22
2015-04-22
10.11648/j.ajtas.20150403.18
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.18
© Science Publishing Group
Statistical Trend Analysis of Residential Water Demand in Kisumu City, Kenya
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.16
This study analyses trend in the monthly water demand series in Kisumu City at both seasonal and non-seasonal levels, using the parametric method of Ordinary Least Squares (OLS) and the non-parametric Mann-Kendall tau and Sen's T tests. Sen's test was applied to validate the Mann-Kendall trend test and to estimate the magnitude and direction of the trend. The significance of the slope of the OLS equation was tested with an F-test based on the analysis of variance (ANOVA). Secondary monthly water consumption data obtained from KIWASCO for the period January 2004 to December 2013 were used. Using logarithmically transformed data, the OLS analysis established that residential water demand in Kisumu City had a significant increasing trend (F_calc = 105.13 > F(1,119)(α = 0.05) = 5.15). Kendall's tau test corroborated the OLS finding of a significant increasing trend. Sen's T test indicated that most months registered a significant upward trend, with Sen's slope estimates showing positive rates of change in residential water demand for those months.
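The Mann-Kendall statistic and Sen's slope used in the study can be sketched as follows. This is a simplified version (no tie correction, no seasonal decomposition) shown on an artificial strictly increasing series.

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction) plus Sen's slope.
    Returns (S statistic, two-sided p-value, Sen's slope estimate)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    i, j = np.triu_indices(n, k=1)            # all pairs with i < j
    s = np.sign(x[j] - x[i]).sum()            # Mann-Kendall S
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance of S without ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * stats.norm.sf(abs(z))
    sen = np.median((x[j] - x[i]) / (j - i))  # Sen's slope estimate
    return s, p, sen

# A strictly increasing series: every pair agrees, slope exactly 1.
s, p, sen = mann_kendall(np.arange(10.0))
```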
Statistical Trend Analysis of Residential Water Demand in Kisumu City, Kenya
doi:10.11648/j.ajtas.20150403.16
American Journal of Theoretical and Applied Statistics
2015-04-20
© Science Publishing Group
Robert Nyamao Nyabwanga
Edgar Ouko Otumba
Fredrick Onyango
Simeyo Otieno
Statistical Trend Analysis of Residential Water Demand in Kisumu City, Kenya
4
3
117
117
2015-04-20
2015-04-20
10.11648/j.ajtas.20150403.16
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.16
© Science Publishing Group
Analysis of Tobacco Smoking Patterns in Kenya Using the Multinomial Logit Model
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.14
Objectives: The study aimed to determine tobacco smoking patterns in Kenya. Methods: The study used the Kenya GATS 2014 data, in which a total sample of 5,436 people was interviewed. Since the research focused on modelling tobacco smoking patterns in Kenya, data from only 4,418 people were used in the analysis; 1,018 people were dropped from the sample because information about their smoking pattern, age, or work status was missing. Data analysis: The data were analysed using R software version 3.0.2, and the report is presented in the form of tables and graphs. Results: The likelihood of a person being a heavy smoker, light smoker, or non-smoker depends on whether the person works in a government or non-government/private organization, is self-employed, or is unemployed. The overall effect of work status was statistically significant, with a chi-square value of 129.722 (p-value < 0.0001). Conclusion: The results show that a person's working status and age are good predictors of a specific smoking pattern, with smoking becoming more common as people grow older.
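The multinomial logit model behind the analysis maps a linear predictor for each non-reference category to category probabilities through a softmax, with the reference category's linear predictor fixed at zero. A minimal sketch with hypothetical coefficients (not the fitted GATS estimates):

```python
import numpy as np

def mnl_probs(x, betas):
    """Predicted category probabilities of a multinomial logit model.
    betas has one coefficient row per non-reference category; the
    reference category's linear predictor is fixed at zero."""
    eta = np.column_stack([np.zeros(len(x)), x @ betas.T])  # reference first
    eta -= eta.max(axis=1, keepdims=True)                   # numerical stability
    e = np.exp(eta)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical design: intercept and age, for two respondents.
x = np.array([[1.0, 25.0],
              [1.0, 60.0]])
# Hypothetical coefficients for two non-reference categories
# (e.g. light smoker and heavy smoker vs. non-smoker as reference).
betas = np.array([[-2.0, 0.03],
                  [-4.0, 0.05]])
probs = mnl_probs(x, betas)
```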
Analysis of Tobacco Smoking Patterns in Kenya Using the Multinomial Logit Model
doi:10.11648/j.ajtas.20150403.14
American Journal of Theoretical and Applied Statistics
2015-04-03
© Science Publishing Group
Samwel N. Mwenda
Anthony Kibira Wanjoya
Anthony Gichuhi Waititu
Analysis of Tobacco Smoking Patterns in Kenya Using the Multinomial Logit Model
4
3
98
98
2015-04-03
2015-04-03
10.11648/j.ajtas.20150403.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.14
© Science Publishing Group
Application of a Bivariate Poisson Model in Devising a Profitable Betting Strategy of the Zimbabwe Premier Soccer League Match Results
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.15
The study seeks to construct a profitable betting strategy for soccer results by developing a bivariate Poisson model for the analysis and computation of probabilities for football match outcomes. The dependence coefficient is estimated from Monte Carlo simulation and the scoring intensities are estimated from a log-linear model. The hypothesis tests show that the home-ground effect exists for some, but not all teams in the Zimbabwe Premier Soccer League. The profitable betting rule is to place a bet on the outcome of a particular match when a model's probabilistic forecast suggests a sufficient edge over the bookmaker's implied probability.
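The match-outcome probabilities that drive the betting rule can be sketched under the simplifying assumption of independent Poisson scoring; the paper's bivariate model adds a dependence term, which is omitted here.

```python
import numpy as np
from scipy import stats

def outcome_probs(lam_home, lam_away, max_goals=15):
    """Home-win / draw / away-win probabilities under independent
    Poisson scoring intensities (a simplification: the dependence
    coefficient of a bivariate Poisson model is ignored)."""
    goals = np.arange(max_goals + 1)
    h = stats.poisson.pmf(goals, lam_home)
    a = stats.poisson.pmf(goals, lam_away)
    grid = np.outer(h, a)                # grid[i, j] = P(home = i, away = j)
    home = np.tril(grid, k=-1).sum()     # home goals > away goals
    draw = np.trace(grid)
    away = np.triu(grid, k=1).sum()
    return home, draw, away

home, draw, away = outcome_probs(2.0, 1.0)       # stronger home side
home_eq, draw_eq, away_eq = outcome_probs(1.5, 1.5)
```

The betting rule then amounts to backing an outcome only when the model probability exceeds the bookmaker's implied probability, i.e. when `home > 1 / decimal_odds` for a home-win bet.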
Application of a Bivariate Poisson Model in Devising a Profitable Betting Strategy of the Zimbabwe Premier Soccer League Match Results
doi:10.11648/j.ajtas.20150403.15
American Journal of Theoretical and Applied Statistics
2015-04-07
© Science Publishing Group
Desmond Mwembe
Lizwe Sibanda
Ndava Constantine Mupondo
Application of a Bivariate Poisson Model in Devising a Profitable Betting Strategy of the Zimbabwe Premier Soccer League Match Results
4
3
111
111
2015-04-07
2015-04-07
10.11648/j.ajtas.20150403.15
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.15
© Science Publishing Group
Analysis of Mean Absolute Deviation for Randomized Block Design under Laplace Distribution
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.19
Analysis of mean absolute deviation (ANOMAD) for the randomized block design is derived, in which the total sum of absolute deviations (TSA) is partitioned into an exact block sum of absolute deviations (BLSA), an exact treatment sum of absolute deviations (TRSA), and a within sum of absolute deviations (WSA). The exact partitions are obtained by eliminating the absolute-value function from the MAD, using the idea of re-expressing the mean absolute deviation as a weighted average of the data with weights summing to zero. ANOMAD has several advantages: it offers a meaningful measure of dispersion, does not square the data, and can be extended to other location measures such as the median. Two ANOMAD graphs are proposed. The variance-gamma distribution is used to fit the sampling distributions of the means of BLSA and TRSA. Consequently, two tests of equal means and medians are proposed under the assumption of a Laplace distribution.
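The device of re-expressing a sum of absolute deviations as a weighted average with weights summing to zero can be checked numerically. This sketch uses the MAD about the median with distinct data and odd n, where the sign weights already sum to zero; the paper's exact ANOMAD partition weights are more general.

```python
import numpy as np

# MAD about the median rewritten without an absolute-value function:
# mad = sum_i w_i * x_i with w_i = sign(x_i - median)/n and sum_i w_i = 0.
x = np.array([1.0, 3.0, 7.0, 9.0, 15.0])
m = np.median(x)                  # 7.0
mad = np.abs(x - m).mean()        # (6 + 4 + 0 + 2 + 8)/5 = 4.0
w = np.sign(x - m) / len(x)       # weights +-1/n, 0 at the median
weighted = w @ x                  # equals mad, no absolute values used
```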
Analysis of Mean Absolute Deviation for Randomized Block Design under Laplace Distribution
doi:10.11648/j.ajtas.20150403.19
American Journal of Theoretical and Applied Statistics
2015-04-22
© Science Publishing Group
Elsayed A. H. Elamir
Analysis of Mean Absolute Deviation for Randomized Block Design under Laplace Distribution
4
3
149
149
2015-04-22
2015-04-22
10.11648/j.ajtas.20150403.19
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.19
© Science Publishing Group
Entropy for Past Residual Life Time Distributions
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.17
The existence of life is uncertain. In the context of reliability and lifetime distributions, measures such as the hazard rate function and the mean residual lifetime function have been used to characterize or compare the aging process of a component. The past residual lifetime deals with a random variable truncated above some time t, i.e. the support of the random variable is taken to be (0, t). We outline some common methods for past residual lifetime distributions with the aim of providing insight into general construction mechanisms. Some applications are given to offer readers a possible source of ideas to draw upon. Applications of past residual lifetime distributions in reliability, survival analysis, and mortality studies are briefly discussed.
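The entropy of a lifetime truncated above t (the past lifetime, with density f(x)/F(t) on (0, t)) can be evaluated numerically straight from its definition. A sketch, checked against the closed form ln t for the uniform case:

```python
import numpy as np
from scipy import integrate

def past_entropy(pdf, cdf, t):
    """Differential entropy of the past lifetime:
    H(t) = -int_0^t (f(x)/F(t)) * log(f(x)/F(t)) dx."""
    ft = cdf(t)
    integrand = lambda x: (pdf(x) / ft) * np.log(pdf(x) / ft)
    val, _ = integrate.quad(integrand, 0.0, t)
    return -val

# Uniform(0, 1) truncated at t: the past density is 1/t on (0, t),
# so the past entropy is exactly ln t.
h_unif = past_entropy(lambda x: 1.0, lambda x: x, 0.5)
```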
Entropy for Past Residual Life Time Distributions
doi:10.11648/j.ajtas.20150403.17
American Journal of Theoretical and Applied Statistics
2015-04-30
© Science Publishing Group
Arif Habib
Meshiel Alalyani
Entropy for Past Residual Life Time Distributions
4
3
124
124
2015-04-30
2015-04-30
10.11648/j.ajtas.20150403.17
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.17
© Science Publishing Group
Application of Logistic Regression in Determining the Factors Influencing the use of Modern Contraceptive among married women in Ethiopia
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.21
The aim of the study was to investigate the determinants of modern contraceptive use among married women in Ethiopia. The study is based on data from the nationally representative Ethiopian Demographic and Health Survey (EDHS) of 2011. The sample includes 9,438 married women aged 15-49 years. Cross-tabulations were carried out at the bivariate level to assess the association between contraceptive use and each of the explanatory variables, and binary multiple logistic regression analysis was used to identify the factors influencing modern contraceptive use among married women in Ethiopia. The bivariate analysis reveals a statistically significant association with all explanatory variables, i.e. age of woman, region, religion, place of residence, education level of woman, number of living children, desire for more children, wealth status, decision maker for modern contraception, educational level of husband, modern contraceptive knowledge and exposure to media. Results of the binary multiple logistic regression analysis reveal that age of woman has a statistically significant positive effect on modern contraceptive use. Contraceptive use was highest in the age group 15-19 years and lowest among married women aged 40-44 years, with married women aged 45-49 years as the reference category. Furthermore, uneducated women and women not at work want no more children. Women of the lowest wealth status are less likely to use modern contraception compared to their corresponding reference group. The results also show that married women who do not discuss family planning with their husbands use modern contraception 25.6% less than couples who made decisions jointly. In general, men play a critical role in determining the size of their family; male involvement, therefore, is an integral component of successful reproductive health programs.
However, the binary logistic regression results do not support the hypothesis that the educational level of the husband influences the use of modern contraceptive methods among women. Media exposure is another factor that influences modern contraceptive use: married women who were not exposed to media were 35.8% less likely to use a modern contraception method than those who had media exposure.
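The abstract above reports odds-based effects from a binary logistic regression. A self-contained sketch (illustrative synthetic data; "age" and "media" are stand-ins, not the EDHS variables) of fitting such a model by Newton-Raphson/IRLS and reading off an odds ratio:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(size=n)                  # standardized, synthetic
media = rng.integers(0, 2, size=n)        # 0/1 exposure indicator, synthetic
X = np.column_stack([np.ones(n), age, media])
beta_true = np.array([-0.5, 0.4, 0.8])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

# Newton-Raphson (equivalently IRLS) for the logistic log-likelihood
beta = np.zeros(3)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))            # fitted probabilities
    grad = X.T @ (y - mu)                           # score vector
    H = X.T @ (X * (mu * (1.0 - mu))[:, None])      # Fisher information
    beta = beta + np.linalg.solve(H, grad)

odds_ratio_media = np.exp(beta[2])   # multiplicative change in the odds
```

exp(coefficient) is the odds ratio; a value below 1 corresponds to statements like "35.8% less likely" in the abstract.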
Application of Logistic Regression in Determining the Factors Influencing the use of Modern Contraceptive among married women in Ethiopia
doi:10.11648/j.ajtas.20150403.21
American Journal of Theoretical and Applied Statistics
2015-05-05
© Science Publishing Group
Kebede Abu Aragaw
Application of Logistic Regression in Determining the Factors Influencing the use of Modern Contraceptive among married women in Ethiopia
4
3
162
162
2015-05-05
2015-05-05
10.11648/j.ajtas.20150403.21
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.21
© Science Publishing Group
An Alternative Method of Estimation of SUR Model
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.20
This paper proposes a transformed method for the SUR model which provides unbiased estimation in the case of two and three equations with high and low collinearity, for both small and large datasets. The generalized least squares (GLS) method for estimation of the seemingly unrelated regression (SUR) model proposed by Zellner (1962) and Srivastava and Giles (1987) yielded higher MSE. Although the ridge estimators proposed by Alkhamisi and Shukur (2008) provided smaller MSE than the others, they were not unbiased in the case of severe multicollinearity. This study shows that the proposed method typically provides an unbiased estimator with lower MSE and TMSE than the traditional methods.
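A compact numpy sketch of the two-step feasible GLS estimator for a two-equation SUR system (this is the classical Zellner estimator mentioned above, not the paper's transformed method; the data and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
# errors are correlated across the two equations -- the defining SUR feature
cov = np.array([[1.0, 0.7], [0.7, 1.0]])
e = rng.multivariate_normal([0.0, 0.0], cov, size=n)
y1 = 1.0 + 2.0 * x1 + e[:, 0]
y2 = -1.0 + 0.5 * x2 + e[:, 1]

X1 = np.column_stack([np.ones(n), x1])
X2 = np.column_stack([np.ones(n), x2])

# step 1: equation-by-equation OLS, then estimate the error covariance
b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]
b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
R = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
S = R.T @ R / n

# step 2: feasible GLS on the stacked system
X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
y = np.concatenate([y1, y2])
Om_inv = np.kron(np.linalg.inv(S), np.eye(n))
beta_fgls = np.linalg.solve(X.T @ Om_inv @ X, X.T @ Om_inv @ y)
```

The FGLS step exploits the cross-equation error correlation that equation-by-equation OLS ignores.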
An Alternative Method of Estimation of SUR Model
doi:10.11648/j.ajtas.20150403.20
American Journal of Theoretical and Applied Statistics
2015-04-24
© Science Publishing Group
Shohel Rana
Mohammad Mastak Al Amin
An Alternative Method of Estimation of SUR Model
4
3
155
155
2015-04-24
2015-04-24
10.11648/j.ajtas.20150403.20
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.20
© Science Publishing Group
Mathematical Modeling of Optimum 3 Step Stress Accelerated Life Testing for Generalized Pareto Distribution
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.22
This article presents the optimum 3-step-stress accelerated life test under the cumulative exposure model. The lifetimes of test units are assumed to follow a generalized Pareto distribution. The scale parameter of the failure time distribution at a constant stress level is assumed to have a log-linear and a quadratic relationship with the stress. A comparison between the linear plan and the quadratic plan, via maximum likelihood estimators for different sample sizes, is shown in tables. The optimum test plans are obtained by minimizing the asymptotic variance of the maximum likelihood estimator of a percentile of the lifetime distribution at the normal stress condition. Tables of the optimum times of changing stress level for both plans are also given.
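A small simulation sketch of the assumed life-stress relationship (all constants here are hypothetical, not the paper's values): lifetimes follow a generalized Pareto distribution whose scale parameter is log-linear in the stress, so higher stress shortens life.

```python
import numpy as np

rng = np.random.default_rng(2)

def gpd_sample(sigma, xi, size, rng):
    # inverse-CDF sampling of the generalized Pareto distribution (xi != 0)
    u = rng.random(size)
    return sigma * ((1.0 - u) ** (-xi) - 1.0) / xi

# hypothetical log-linear life-stress relation: log(scale) = a + b * stress
a, b, xi = 2.0, -0.5, 0.2
stresses = [1.0, 2.0, 3.0]       # the three levels of a 3-step-stress test
means = []
for s in stresses:
    sigma = np.exp(a + b * s)
    t = gpd_sample(sigma, xi, 100_000, rng)
    means.append(t.mean())       # theoretical mean is sigma/(1-xi) for xi < 1
```

The simulated mean lifetimes decrease with stress, matching the log-linear relation with a negative stress coefficient.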
Mathematical Modeling of Optimum 3 Step Stress Accelerated Life Testing for Generalized Pareto Distribution
doi:10.11648/j.ajtas.20150403.22
American Journal of Theoretical and Applied Statistics
2015-05-12
© Science Publishing Group
Sadia Anwar
Sana Shahab
Arif Ul Islam
Mathematical Modeling of Optimum 3 Step Stress Accelerated Life Testing for Generalized Pareto Distribution
4
3
169
169
2015-05-12
2015-05-12
10.11648/j.ajtas.20150403.22
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.22
© Science Publishing Group
Multinomial Logit Modeling of Factors Associated With Multiple Sexual Partners from the Kenya AIDS Indicator Survey 2007
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.23
The number of lifetime sex partners of an individual has an important effect on the individual's Human Immunodeficiency Virus (HIV) status; hence modeling multiple sexual partnerships is an essential component of any analysis of HIV outcomes. Multiple sexual partnerships are associated with greater risk of HIV, sexually transmitted infections (STIs) and intimate partner violence. This research project presents a general approach for modeling the logit of clustered (correlated) ordinal and nominal responses using polytomous data from the Kenya AIDS Indicator Survey 2007 (NASCOP 2010). We review multinomial logit models as generalized linear models. The model is applied to HIV prevalence data among men and women in Kenya, derived from the Kenya AIDS Indicator Survey 2007 (KAIS). We generalize logistic regression to handle multinomial response variables, with separate models for the nominal and ordinal cases. When modeling a nominal response variable we are interested in finding whether certain predictors have an effect on the category probabilities. The baseline-category logit model, used for nominal responses, models the odds of being in one category relative to being in a designated (last) category, for all pairs of categories; it is fitted by maximum likelihood estimation (MLE). To model an ordinal response variable, one models the cumulative response probabilities or cumulative odds. The cumulative logit model is used when the response of an individual unit is restricted to one of a finite number of ordinal values. This study shows the practicality of the multinomial logit model in analyzing epidemiological data. Other studies have found education to be associated with multiple sexual partners; in this study, we observed that having multiple sexual partners is not related to education.
Other covariates, such as gender, place of residence, being sexually active in the past 12 months and marital status, were found to be associated with multiple sexual partners. Individuals who were sexually active in the past 12 months were found to be ten times more likely to have multiple sexual partners than those who were not. After controlling for all other factors, the odds ratio of males to females having multiple sexual partners doubled to 7.6, meaning males are almost 8 times as likely as females to have multiple sexual partners. Partner testing or couples testing is a main strategy of national testing initiatives in Kenya, and respondents are encouraged to learn their test results with their partner.
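The baseline-category logit model described in the abstract can be sketched directly: with the last category J as baseline, the model sets log(P(Y=j|x)/P(Y=J|x)) = x·β_j with β_J = 0. A minimal illustration (the coefficients and covariate are hypothetical, not KAIS estimates):

```python
import numpy as np

def category_probs(x, betas):
    """Baseline-category logit probabilities.
    betas: (J-1, p) coefficient rows; the last category is the baseline."""
    eta = np.array([x @ b for b in betas] + [0.0])   # baseline gets eta = 0
    e = np.exp(eta - eta.max())                      # numerically stable softmax
    return e / e.sum()

# hypothetical coefficients for 3 response categories (e.g. 0, 1, 2+ partners)
betas = np.array([[0.2, -1.0],    # category 0 vs baseline
                  [0.1,  0.5]])   # category 1 vs baseline
x = np.array([1.0, 0.3])          # intercept plus one covariate
p = category_probs(x, betas)
logit_0_vs_base = np.log(p[0] / p[2])   # recovers the linear predictor x @ betas[0]
```

The log-odds of any category against the baseline is exactly the corresponding linear predictor, which is what makes the fitted coefficients interpretable as (log) odds ratios.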
Multinomial Logit Modeling of Factors Associated With Multiple Sexual Partners from the Kenya AIDS Indicator Survey 2007
doi:10.11648/j.ajtas.20150403.23
American Journal of Theoretical and Applied Statistics
2015-05-16
© Science Publishing Group
Beryl Ang’iro
Samuel Mwalili
Josphat Kinyanjui
Multinomial Logit Modeling of Factors Associated With Multiple Sexual Partners from the Kenya AIDS Indicator Survey 2007
4
3
177
177
2015-05-16
2015-05-16
10.11648/j.ajtas.20150403.23
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.23
© Science Publishing Group
Modeling Road Traffic Accident Injuries in Nairobi County: Model Comparison Approach
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.24
Road Traffic Accident (RTA) injuries are a neglected cause of death and disability in Nairobi County. Nairobi County has the highest injury rates in Kenya, notably in the active age group of 15-29 years, which constitutes approximately 40% of its population. This signifies the importance of properly analyzing traffic accident data and predicting injuries, not only to explore the underlying causes of RTA injuries but also to initiate appropriate safety and policy measures in the County. The study therefore modeled RTA injuries that occurred from 2002 to 2014 in Nairobi County using Artificial Neural Networks (ANN). The ANN is a powerful technique that has demonstrated considerable success in analyzing historical data to predict future trends. However, the use of ANN in accident analysis is relatively new and rare, and thus the negative binomial regression approach was used as the study's baseline model. The empirical results indicated that the ANN model outperformed the negative binomial model in its overall performance.
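The study's baseline is negative binomial regression, which handles the overdispersion typical of accident counts. A sketch of the NB2 probability mass function (parameter values are illustrative), verifying the mean-variance relation Var(Y) = μ + μ²/k that distinguishes it from the Poisson:

```python
import math
import numpy as np

def nb_pmf(y, mu, k):
    """NB2 pmf with mean mu and dispersion k (variance mu + mu**2 / k)."""
    log_p = (math.lgamma(y + k) - math.lgamma(k) - math.lgamma(y + 1)
             + k * math.log(k / (k + mu)) + y * math.log(mu / (k + mu)))
    return math.exp(log_p)

mu, k = 4.0, 2.0                     # illustrative mean and dispersion
ys = np.arange(0, 200)               # support truncated far into the tail
pmf = np.array([nb_pmf(y, mu, k) for y in ys])
mean = float((ys * pmf).sum())
var = float(((ys - mean) ** 2 * pmf).sum())   # should be mu + mu**2/k = 12
```

When k is large the NB2 variance approaches the Poisson's Var(Y) = μ; small k signals strong overdispersion.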
Modeling Road Traffic Accident Injuries in Nairobi County: Model Comparison Approach
doi:10.11648/j.ajtas.20150403.24
American Journal of Theoretical and Applied Statistics
2015-05-27
© Science Publishing Group
Julius Nyerere Odhiambo
Anthony Kibira Wanjoya
Anthony Gichuhi Waititu
Modeling Road Traffic Accident Injuries in Nairobi County: Model Comparison Approach
4
3
184
184
2015-05-27
2015-05-27
10.11648/j.ajtas.20150403.24
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.24
© Science Publishing Group
Statistical Analysis of Determinants of Maternal Institutional Delivery Service Utilization in Ethiopia
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.11
Utilization of maternal healthcare services is a proximate determinant of maternal morbidity and mortality. To improve maternal health care services, it is important to understand the factors influencing their utilization. The main purpose of this study is to determine statistically the correlates of maternal place of delivery in Ethiopia, using 2011 EDHS data. The overall frequency of maternal delivery at a health facility in Ethiopia was 12.8%. A logistic regression model was used to model the effects of selected socioeconomic and demographic covariates. It was found that place of residence, birth order of the child, mother's age at child birth, mother's educational level, household economic status and mother's employment status were the most important determinants of the mother's place of delivery in Ethiopia. It is suggested that, to improve maternal healthcare service utilization, maternal education should be supported as a policy; this could be achieved through female literacy programs in the country.
Statistical Analysis of Determinants of Maternal Institutional Delivery Service Utilization in Ethiopia
doi:10.11648/j.ajtas.20150403.11
American Journal of Theoretical and Applied Statistics
2015-03-24
© Science Publishing Group
Kasahun Takele Geneti
Statistical Analysis of Determinants of Maternal Institutional Delivery Service Utilization in Ethiopia
4
3
77
77
2015-03-24
2015-03-24
10.11648/j.ajtas.20150403.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.11
© Science Publishing Group
Comparative Study of Efficiency of Integer Programming, Simplex Method and Transportation Method in Linear Programming Problem (LPP)
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.13
In this paper, we present a Linear Programming Problem (LPP) to minimize the cost of transporting NBC PLC products from three distribution centres to ten depots. Three methods of analysis were considered, namely integer programming, the simplex method and the transportation method, implemented via computer packages. The analysis revealed that all three methods give the same cost of transportation from these distribution centres to the 10 depots; that is, the optimal cost is N9,127,776.
Comparative Study of Efficiency of Integer Programming, Simplex Method and Transportation Method in Linear Programming Problem (LPP)
doi:10.11648/j.ajtas.20150403.13
American Journal of Theoretical and Applied Statistics
2015-03-31
© Science Publishing Group
Ayansola Olufemi Aderemi
Oyenuga Iyabode Favour
Abimbola Latifat Adebisi
Comparative Study of Efficiency of Integer Programming, Simplex Method and Transportation Method in Linear Programming Problem (LPP)
4
3
88
88
2015-03-31
2015-03-31
10.11648/j.ajtas.20150403.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.13
© Science Publishing Group
Robust Linear Regression Using L1-Penalized MM-Estimation for High Dimensional Data
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.12
Large datasets, where the number of predictors p is larger than the sample size n, have become very common in recent years. Such datasets pose great challenges for building a good linear prediction model. In addition, when a dataset contains a fraction of outliers and other contamination, linear regression becomes a difficult problem. Therefore, we need methods that are sparse and robust at the same time. In this paper, we implement the MM-estimation approach and propose L1-penalized MM-estimation (MM-Lasso). The proposed estimator combines the sparse LTS estimator with penalized M-estimators to obtain sparse model estimation with a high breakdown value and good prediction. We implemented MM-Lasso in the C programming language. A simulation study demonstrates the favorable prediction performance of MM-Lasso.
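The L1 (sparsity) component of an estimator like MM-Lasso can be illustrated with plain coordinate descent for the ordinary lasso; this sketch deliberately omits the robust MM/LTS machinery and uses synthetic data (all values illustrative, not the paper's method):

```python
import numpy as np

def soft_threshold(z, g):
    # the proximal operator of the L1 penalty
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def lasso_cd(X, y, lam, iters=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n
    r = y - X @ b
    for _ in range(iters):
        for j in range(p):
            r = r + X[:, j] * b[j]               # partial residual without j
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_ss[j]
            r = r - X[:, j] * b[j]
    return b

rng = np.random.default_rng(3)
n, p = 200, 50                                   # many predictors, few signals
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + rng.normal(scale=0.5, size=n)
b_hat = lasso_cd(X, y, lam=0.1)
```

The soft-thresholding step is what sets small coefficients exactly to zero, producing the sparse fits the abstract refers to; a robust variant would additionally downweight outlying observations.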
Robust Linear Regression Using L1-Penalized MM-Estimation for High Dimensional Data
doi:10.11648/j.ajtas.20150403.12
American Journal of Theoretical and Applied Statistics
2015-03-30
© Science Publishing Group
Kamal Darwish
Ali Hakan Buyuklu
Robust Linear Regression Using L1-Penalized MM-Estimation for High Dimensional Data
4
3
84
84
2015-03-30
2015-03-30
10.11648/j.ajtas.20150403.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.12
© Science Publishing Group
Modeling Panel Data: Comparison of GLS Estimation and Robust Covariance Matrix Estimation
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.25
The proliferation of panel studies, greatly motivated by the availability of data and by a greater capacity for modeling the complexity of human behavior than a single cross-section or time series, has led to the rise of challenging methodologies for estimating such data sets. Much of the controversy about these methodologies concerns the under-estimation of standard errors, which leads to wrong conclusions in hypothesis tests and to inappropriate inference about the underlying model parameters. This is due to the heteroscedastic and autocorrelated nature of the disturbance term in the classical linear regression model. This study sought to estimate linear panel model parameters using regression techniques that can address the correlation and heteroscedasticity problems. By relaxing the homogeneity and non-correlation assumptions on the disturbance term of the classical linear regression model, we employed the generalized least squares method to estimate the model parameters. Using the available White heteroscedasticity-consistent estimators, i.e. HC0, HC1, HC2, HC3 and HC4, we also obtained robust estimates of the least squares covariance matrix.
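The White heteroscedasticity-consistent estimators named in the abstract differ only in how squared OLS residuals are weighted before entering the "meat" of the sandwich covariance. A numpy sketch on synthetic heteroscedastic data (HC4's exponent follows Cribari-Neto's min(4, n·h_i/k) rule; the data are illustrative, not the study's panel):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(size=n) * (0.5 + np.abs(x))  # heteroscedastic errors

b = np.linalg.lstsq(X, y, rcond=None)[0]
u = y - X @ b                                    # OLS residuals
XtX_inv = np.linalg.inv(X.T @ X)
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)      # leverage values h_i
k = X.shape[1]

def sandwich(w):
    # (X'X)^{-1} X' diag(w) X (X'X)^{-1}
    return XtX_inv @ (X.T @ (X * w[:, None])) @ XtX_inv

HC0 = sandwich(u ** 2)
HC1 = HC0 * n / (n - k)                          # degrees-of-freedom correction
HC2 = sandwich(u ** 2 / (1 - h))                 # leverage adjustment
HC3 = sandwich(u ** 2 / (1 - h) ** 2)            # jackknife-style adjustment
HC4 = sandwich(u ** 2 / (1 - h) ** np.minimum(4.0, n * h / k))
se_hc3 = np.sqrt(np.diag(HC3))                   # robust standard errors
```

HC2-HC4 inflate residuals at high-leverage points, so their variance estimates are never smaller than HC0's; this is exactly the correction for the standard-error under-estimation the abstract describes.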
Modeling Panel Data: Comparison of GLS Estimation and Robust Covariance Matrix Estimation
doi:10.11648/j.ajtas.20150403.25
American Journal of Theoretical and Applied Statistics
2015-05-28
© Science Publishing Group
Victor Muthama Musau
Anthony Gichuhi Waititu
Anthony Kibira Wanjoya
Modeling Panel Data: Comparison of GLS Estimation and Robust Covariance Matrix Estimation
4
3
191
191
2015-05-28
2015-05-28
10.11648/j.ajtas.20150403.25
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.25
© Science Publishing Group
Comparison of Methods of Handling Missing Data: A Case Study of KDHS 2010 Data
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.26
Missing data pose a major threat to observational and experimental studies. Analyzing data while ignoring missingness yields estimates that are inefficient and biased. Various studies have been carried out to determine the best methods of dealing with missing data. The analyses in these studies involved simulating missing data from complete data; the missing values are then imputed using the various methods, and the best method is identified by examining the bias of the imputed estimates relative to the complete-data estimates and the magnitude of the standard errors. This study aimed at establishing the best method of dealing with missing data based on goodness-of-fit tests. The study made use of data from the KDHS 2010. The overall rate of missingness was about 80%. The missing data mechanism was tested and shown to be MAR. The missing data were then imputed using the Expectation Maximization (EM) algorithm and multiple imputation. Logistic models were then fitted to both datasets. Afterwards, goodness-of-fit tests were carried out to determine which of the two methods was the better imputation method; these tests were the AIC, the Root Mean Square Error of Approximation (RMSEA) and Cox and Snell's R-squared. The predictive ability of the two models was also examined using confusion matrices and the area under the receiver operating characteristic curve (AUROC). From these tests, multiple imputation was seen to be the better imputation method, since the logistic regression model fitted the multiply imputed data better than the data imputed using expectation maximization. From the results of the study, the researchers recommend that the type of missingness present in data should be examined; if the amount of missing data is large and the data are MAR, the data should be imputed using multiple imputation before any inferences are made. The researchers suggest more research to determine the maximum rate of missing data that should be imputed.
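A toy illustration of why the imputation method matters under MAR (this compares naive mean imputation with stochastic regression imputation, the building block of multiple imputation; it is not the paper's EM/MI analysis, and all data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(scale=0.6, size=n)      # true slope is 0.8

# MAR mechanism: y is more likely to be missing when x is large
miss = rng.random(n) < 1.0 / (1.0 + np.exp(-x))
y_obs = y.copy()
y_obs[miss] = np.nan
obs = ~miss

# naive mean imputation
y_mean = np.where(obs, y_obs, np.nanmean(y_obs))

# stochastic regression imputation (a single multiple-imputation draw):
# fit y ~ x on the observed rows, impute with added noise
A = np.column_stack([np.ones(obs.sum()), x[obs]])
coef, res, *_ = np.linalg.lstsq(A, y_obs[obs], rcond=None)
sigma = np.sqrt(res[0] / (obs.sum() - 2))
y_reg = np.where(obs, y_obs,
                 coef[0] + coef[1] * x + rng.normal(scale=sigma, size=n))

def slope(yv):
    return np.polyfit(x, yv, 1)[0]

bias_mean = abs(slope(y_mean) - 0.8)   # mean imputation attenuates the slope
bias_reg = abs(slope(y_reg) - 0.8)     # regression imputation roughly recovers it
```

Mean imputation flattens the x-y relationship on the imputed rows, badly biasing the slope, while the regression-based draw preserves it; multiple imputation repeats such draws and pools the results.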
Comparison of Methods of Handling Missing Data: A Case Study of KDHS 2010 Data
doi:10.11648/j.ajtas.20150403.26
American Journal of Theoretical and Applied Statistics
2015-05-29
© Science Publishing Group
Shelmith Nyagathiri Kariuki
Anthony Waititu Gichuhi
Anthony Kibira Wanjoya
4
3
200
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.26
A Design Unbiased Variance Estimator of the Systematic Sample Means
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.27
Systematic sampling is widely used in surveys of finite populations because of its appealing simplicity and efficiency. When properly applied, it can reflect stratification in the population and thus be more precise than simple random sampling. In systematic sampling, the sampling units are evenly spread over the whole population, so the scheme is very sensitive to correlation between units: a positive autocorrelation reduces the precision, while a negative autocorrelation improves it relative to simple random sampling. A limitation of the method is that no unbiased estimator of the design variance is available in general. This study proposes an estimator of the design variance based on a non-parametric model for the population, using local polynomial regression as the estimation technique. The non-parametric model is flexible enough to hold in many practical situations. A simulation study is performed to compare the efficiency of the proposed estimator with existing ones, using Relative Bias (RB) and Mean Square Error (MSE) as performance measures. The simulation results show that the local polynomial estimator based on the nonparametric model is consistent and design unbiased for the variance of the systematic sample mean, with smaller relative biases and mean squared errors than the existing estimators.
doi:10.11648/j.ajtas.20150403.27
American Journal of Theoretical and Applied Statistics
2015-05-30
© Science Publishing Group
Festus A. Were
George Orwa
Romanus Odhiambo
4
3
210
Research on Beijing Total Logistics Demand Prediction Based on Grey Prediction Model
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150403.28
Regional logistics demand forecasting is an important part of logistics system planning: it provides the basis for formulating regional logistics development policy, planning the construction of logistics infrastructure, and analyzing trends in the logistics market. Because China's logistics industry is still in its infancy, the statistical data needed for logistics demand forecasting are incomplete, so making reasonable regional demand forecasts from limited sample data has become an important research topic. Beijing, as China's political and economic center, is a major hub for domestic cargo turnover and for imports and exports; as the core of the Bohai Sea Economic Circle, it plays an increasingly important role in regional economic development as a transport hub and logistics channel, which makes a forecast of Beijing's logistics demand necessary. This paper analyzes the present state of logistics development in Beijing. It first reviews Beijing's basic economic situation in terms of total economic output, economic structure and economic location; it then examines the status and problems of Beijing's logistics development from the perspectives of transportation infrastructure, the current state of the logistics industry and logistics enterprises; finally, it analyzes the environment for logistics development in Beijing. All of this indicates the need for a forecast of Beijing's logistics demand.
The paper uses an econometric model to analyze and forecast Beijing's total logistics demand. It discusses the factors influencing that demand, constructs an index system for logistics demand forecasting, and selects freight volume and freight turnover as quantitative indicators of total logistics demand. Using EViews for the modeling and analysis, it finds that Beijing's logistics demand will grow rapidly over the next five years.
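The title names a grey prediction model; as an illustrative sketch only (the abstract itself reports an econometric analysis in EViews), here is a minimal pure-Python GM(1,1) grey model fitted by least squares. The freight series used in the example is hypothetical:

```python
# GM(1,1) grey prediction: accumulate the series, fit the grey
# differential equation by least squares, forecast, then difference back.
import math

def gm11(x0, steps=1):
    """Fit GM(1,1) to series x0 and forecast `steps` values ahead."""
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1 = [sum(x0[:k + 1]) for k in range(n)]
    # background values: consecutive means of the accumulated series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    # least-squares solution of y(k) = -a*z(k) + b via 2x2 normal equations
    szz, sz = sum(v * v for v in z), sum(z)
    sy, szy = sum(y), sum(v * w for v, w in zip(z, y))
    m = n - 1
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # time-response function for the accumulated series, then inverse AGO
    def x1_hat(k):  # k is a 0-based index into the accumulated series
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]

# hypothetical freight series growing ~10% per year; the one-step
# forecast continues that trend
print(gm11([100.0, 110.0, 121.0, 133.1], steps=1))
```

GM(1,1) is designed for exactly the situation the abstract describes: short series with few observations, where classical econometric models are hard to identify.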
Research on Beijing Total Logistics Demand Prediction Based on Grey Prediction Model
doi:10.11648/j.ajtas.20150403.28
American Journal of Theoretical and Applied Statistics
2015-06-02
© Science Publishing Group
Jie Zhu
Hong Zhang
Li Zhou
4
3
222
An Empirical Analysis of Queuing Model and Queuing Behaviour in Relation to Customer Satisfaction at JKUAT Students Finance Office
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150404.12
Over the years, the university's population has increased with the introduction of the double-intake system, which has led to long waiting times and long queues at the students' finance department, owing to few service stations, inefficiencies in the payment system and disorderly queuing by students. To enhance service delivery, a proper queuing system is needed; this is achieved by putting in place measures that ensure a good flow of students at the service counters. Focusing only on the main queue, we collect data and carry out an empirical analysis of the model in use. Using queuing theory principles and formulas, the study found that on average 22 customers arrive every hour and the service rate is 23.7 customers per hour. The system utilization factor was 92.95%, the probability of zero customers waiting was 7.05%, the average number of customers waiting was 12.252, and the average waiting time was 33.415 min. The study compared the single-server model against the multi-server model and concluded that the M/M/1 model was not the best for the finance department. Using a questionnaire of 384 respondents, the study found that almost all customers are dissatisfied with the waiting lines, and some students regularly turn away because of the long queues. The time students wait to be served should not be overlooked; the study emphasizes constant checks on their changing needs and improvements in service times. In today's competitive business environment, modern society is progressively becoming service-dominated. Customer satisfaction and service operation capabilities give an organization a competitive advantage in the marketplace, which has made service operations management increasingly important. As a result, waiting has drawn great attention from business operations management specialists.
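The figures quoted above follow from the standard steady-state M/M/1 formulas. A minimal sketch using the reported rates (λ = 22/hr, μ = 23.7/hr); small differences from the paper's exact figures are to be expected from rounding in the reported rates:

```python
# Steady-state M/M/1 metrics from the arrival and service rates
# reported in the abstract.
lam, mu = 22.0, 23.7          # arrivals/hr, services/hr

rho = lam / mu                # utilization factor
p0 = 1 - rho                  # probability the system is empty
Lq = rho**2 / (1 - rho)       # mean number waiting in queue
Wq_min = Lq / lam * 60        # mean waiting time in queue, minutes

print(f"utilization = {rho:.4f}, P0 = {p0:.4f}")
print(f"Lq = {Lq:.2f} customers, Wq = {Wq_min:.1f} min")
```

With utilization this close to 1, Lq and Wq blow up quickly; that sensitivity is why the study recommends moving away from the single-server configuration.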
doi:10.11648/j.ajtas.20150404.12
American Journal of Theoretical and Applied Statistics
2015-06-02
© Science Publishing Group
Sammy Kariuki Mwangi
Thomas Mageto Ombuni
4
4
246
Modelling of Credit Risk: Random Forests versus Cox Proportional Hazard Regression
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150404.13
In survival analysis, several regression modeling strategies can be applied to predict the risk of future events. Often, however, the default choice tends to be Cox regression modeling, due to its convenience. Extensions of the random forest approach to survival analysis provide an alternative way to build a risk prediction model. This paper discusses the two approaches with reference to credit management and compares the impact and results of both methods. The Cox Proportional Hazard model performed better than the Random Survival Forest when estimating credit risk.
doi:10.11648/j.ajtas.20150404.13
American Journal of Theoretical and Applied Statistics
2015-06-02
© Science Publishing Group
Dyana Kwamboka Mageto
Samuel Musili Mwalili
Anthony Gichuhi Waititu
4
4
253
Exponentially Weighted Moving Average Control Charts for Monitoring Ambient Ozone Levels in Muscat
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150404.14
Exponentially Weighted Moving Average (EWMA) control charts are proposed to monitor ambient ozone (O3) levels in the city center and industrial areas of Muscat. Weekly averages of 8-hourly ozone concentrations over a period of one year were used. The EWMA charts showed a significant shift in the mean ozone levels at both sites. However, both ozone series were found to have significant autocorrelation. Therefore, Box-Jenkins autoregressive integrated moving average (ARIMA) models were fitted first, and the EWMA charts were then applied to the residuals; these revealed that the ozone levels in both areas are within natural tolerance limits as well as within the international standard limit.
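The EWMA chart behind this study is driven by the recursion z_t = λ·x_t + (1−λ)·z_{t−1} with control limits around the in-control mean. A minimal pure-Python sketch; λ = 0.2 and L = 3 are common textbook defaults, not values taken from the paper, and the in-control mean and variance are estimated from the series itself rather than from ARIMA residuals as the paper does:

```python
# EWMA control chart: smooth the series and flag points outside the
# asymptotic control limits mean ± L * sigma * sqrt(lam / (2 - lam)).
import math

def ewma_chart(x, lam=0.2, L=3.0):
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / (len(x) - 1)
    sigma_z = math.sqrt(var * lam / (2 - lam))   # asymptotic EWMA std dev
    lcl, ucl = mean - L * sigma_z, mean + L * sigma_z
    z, out = mean, []                            # start the chart at the mean
    for v in x:
        z = lam * v + (1 - lam) * z
        out.append((z, z < lcl or z > ucl))      # (EWMA value, out-of-control?)
    return out, (lcl, ucl)

# hypothetical weekly ozone averages
points, limits = ewma_chart([3.1, 2.9, 3.0, 3.2, 2.8, 3.0, 3.1, 4.6])
print("limits:", limits)
print([round(z, 3) for z, _ in points])
```

Because each EWMA value pools information from all past observations, the chart detects small sustained shifts faster than a Shewhart chart of individual values.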
doi:10.11648/j.ajtas.20150404.14
American Journal of Theoretical and Applied Statistics
2015-06-02
© Science Publishing Group
Muhammad Idrees Ahmad
4
4
257
Statistical Analysis of Compartment Permeability Influence on Damaged Ship Motions
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150404.11
The study uses wave statistics to show the changes in heave and pitch motions when a ship hull is damaged, for varying compartment permeability values. It is an experimental investigation obtaining motion measurements of an intact and a damaged frigate model in waves. Experiments were carried out using the Southampton Solent University towing tank facility and a 1/43.62-scale segmented frigate model of the Leander Class frigate hull. During the experiments the wave length/wave height ratio was kept constant while wave frequencies were varied from 3.173 to 6.276 rad/s. The tests were carried out with the model stationary and with a forward speed of 1.4 m/s. The results indicated that compartment permeability has a non-linear effect on the heave and pitch motions of a damaged ship.
doi:10.11648/j.ajtas.20150404.11
American Journal of Theoretical and Applied Statistics
2015-06-02
© Science Publishing Group
Domeh Daniel Vindex Kwabla
Lartey David
4
4
232
Study on Financial Market Risk Measurement Based on Asymmetric Laplace Distribution
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150404.16
Financial asset return series have peaked, fat-tailed and asymmetric distributions. Drawing on the theory of the Asymmetric Laplace (AL) distribution, this paper proposes an AL-VaR (AL-CVaR) parametric method and a Monte Carlo simulation method based on the AL distribution. We analyze the VaR (CVaR) measurement model under the AL distribution and discuss its backtesting, and then evaluate the pros and cons of each method against the characteristics of stock market risk in three countries (America, China and Japan).
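For intuition on parametric Laplace VaR, the quantile has a closed form. The sketch below uses the *symmetric* Laplace distribution for simplicity, not the paper's asymmetric generalisation (whose quantiles have an analogous but skew-dependent closed form); the parameter values are hypothetical:

```python
# Parametric VaR under a symmetric Laplace(mu, b) return model.
# For p <= 0.5 the p-quantile is mu + b*ln(2p), from inverting the
# lower-tail CDF F(x) = 0.5*exp((x - mu)/b).
import math

def laplace_var(mu, b, alpha=0.05):
    """Value-at-Risk: the alpha-quantile of returns, reported as a positive loss."""
    assert 0 < alpha <= 0.5
    q = mu + b * math.log(2 * alpha)   # lower-tail return quantile
    return -q                          # sign-flip: VaR is quoted as a loss

# e.g. zero-mean daily returns with scale b = 1%: the 5% VaR is about 2.3%
print(laplace_var(0.0, 0.01, 0.05))
```

The ln(2α) term decays much more slowly than the Gaussian quantile as α shrinks, which is how Laplace-family models capture the fat tails the abstract mentions.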
doi:10.11648/j.ajtas.20150404.16
American Journal of Theoretical and Applied Statistics
2015-06-08
© Science Publishing Group
Hong Zhang
Li Zhou
Jie Zhu
4
4
268
Principal Component and Principal Axis Factoring of Factors Associated with High Population in Urban Areas: A Case Study of Juja and Thika, Kenya
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150404.15
Development today is strongly influenced by population concentration in urban areas, with the result that living standards rise in the urban parts of a country while rural areas lag behind. A major goal of the Kenyan government is to balance the development of urban and rural areas so that no areas are left behind as others move forward. In this research, the PCA and PAF methods of factor reduction were applied. PCA is a widely used method for factor extraction: factor weights are computed so as to extract the maximum possible variance, with successive factoring continuing until there is no further meaningful variance left, and the factor model is then rotated for analysis. PAF restricts attention to the variance that is common among variables and does not redistribute the variance unique to any one variable. Parallel analysis, Cattell's scree test criterion and the eigenvalue rule were applied. Results indicated that parallel analysis was generally the best, the scree test was generally accurate, and Kaiser's method tended to overestimate the number of components. In this research, business and employment emerged as the major factors associated with high population in the two towns; amenities such as telephone networks and markets were also associated with high population. The study recommends that the Kenyan government apply PCA and PAF to determine the major factors associated with high population in other major urban areas (towns and cities), especially using the 2009 population and housing census results, so as to assist in allocating revenue under the current devolved system of government. This would ensure no counties are left behind in terms of development. The government should strive to provide social amenities and utilities in rural areas, and to provide jobs there so as to curb very rapid growth in urban areas.
People in rural areas could also receive government-led vocational training on self-employment. The PAF method demonstrated better results than PCA since it accounted for measurement errors; it was also able to recover weaker factors than PCA could. PAF removed the unique and error variance, so its results were more reliable; it was also preferred because it accounted for the co-variation, whereas PCA accounts for the total variance.
doi:10.11648/j.ajtas.20150404.15
American Journal of Theoretical and Applied Statistics
2015-06-05
© Science Publishing Group
Josephine Njeri Ngure
J. M. Kihoro
Anthony Waititu
4
4
263
Bayesian Semi-Parametric Regression Analysis of Childhood Malnutrition in Gamo Gofa Zone: The Social and Economic Impact of Child Undernutrition
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150404.17
Major progress has been made over the last decades in reducing the prevalence of malnutrition among children under 5 years of age in developing countries; nevertheless, approximately 27% of children under 5 in these countries are still malnourished. This work focuses on childhood malnutrition in the Gamo Gofa Zone, Ethiopia. The study examined the association between demographic and socioeconomic determinants and malnutrition in children under 5, using data from rural and urban sample surveys conducted in selected Woredas from December 1 to January 5, 2013. The study of child undernutrition and underweight prevalence in Gamo Gofa allowed us to quantify the negative impacts of child undernutrition in both social and economic terms. The results revealed that as many as 75% of all cases of child undernutrition and its related pathologies go untreated, and that about 35% of the health costs associated with undernutrition occur before the child turns one year old. Overall, the analysis shows that place of residence, employment status of the mother, employment status of the partner, educational status of the mother, diarrhea, household economic level and source of drinking water are the most important determinants of children's health and nutritional status. The study revealed that socio-economic, demographic, health and environmental variables have significant effects on the nutritional and health status of children in Ethiopia.
doi:10.11648/j.ajtas.20150404.17
American Journal of Theoretical and Applied Statistics
2015-06-19
© Science Publishing Group
Tilahun Ferede Asena
Derbachew Asfaw Teni
4
4
276
A Simple Conditional Approach for Generating Spatial Correlated Binary Data
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150404.21
Generating a spatial random field in which the observations are binary random variables with a particular covariance function may be impossible, because of restrictions on the parameters of Bernoulli variables. This paper develops a conditional method, based on a spatial GLMM, for generating spatially correlated binary data; the variograms of the simulated data are similar to those of the corresponding latent Gaussian random field. However, a closed form for the resulting spatial correlation is not available.
Generating a spatial random field in which the observations are binary random variables with a particular covariance function may be impossible, because there are restrictions on the parameters of Bernoulli variables. This paper develops a conditional method based from spatial GLMM for generating spatial correlated binary data, which can generate spatial correlated binary data, with the variograms of the simulated data are similar to the variograms of the corresponding latent Gaussian random field. However, the closed form for their spatial correlation is not available specifically.
A Simple Conditional Approach for Generating Spatial Correlated Binary Data
doi:10.11648/j.ajtas.20150404.21
American Journal of Theoretical and Applied Statistics
2015-07-17
© Science Publishing Group
Renhao Jin
Tao Liu
Fang Yan
Jie Zhu
4
4
311
Spatial Correlation Analysis of 2013 Per capita GDP in the Area of Beijing, Tianjin and Hebei
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150404.22
This paper applies spatial analysis, based on Moran's I coefficient and Geary's c coefficient and supported by the SAS statistical analysis software, to the per capita GDP and geographical coordinates of the Beijing-Tianjin-Hebei region. The results show that Moran's I coefficient is 0.098 and Geary's c coefficient is 0.868, indicating a positive spatial correlation among the city economies of the Beijing-Tianjin-Hebei region. The degree of correlation is low, however, which suggests that Beijing-Tianjin-Hebei collaborative development is still at an initial stage and that regional economic integration has not yet been fully realized.
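The two spatial statistics named in this abstract have simple closed forms. As a hedged illustration (the chain adjacency matrix and data below are toy assumptions, not the paper's GDP data), both can be computed with numpy:

```python
import numpy as np

def morans_i(x, w):
    """Moran's I: I = (n / sum(w)) * (z' W z) / (z' z), where z = x - mean(x)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    return (len(x) / w.sum()) * (z @ w @ z) / (z @ z)

def gearys_c(x, w):
    """Geary's c: c = ((n-1) / (2 sum(w))) * sum_ij w_ij (x_i - x_j)^2 / (z' z)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    diff2 = (x[:, None] - x[None, :]) ** 2
    return ((len(x) - 1) / (2 * w.sum())) * (w * diff2).sum() / (z @ z)

# Toy example: four units on a line with symmetric rook (chain) adjacency.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = [1.0, 2.0, 3.0, 4.0]
print(morans_i(x, w))  # I > 0 indicates positive spatial autocorrelation
print(gearys_c(x, w))  # c < 1 indicates positive spatial autocorrelation
```

As in the paper's interpretation, I above 0 and c below 1 together point to positive spatial correlation; the magnitude of I indicates how strong it is.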
doi:10.11648/j.ajtas.20150404.22
American Journal of Theoretical and Applied Statistics
2015-07-17
© Science Publishing Group
Renhao Jin
Tao Liu
Fang Yan
Jie Zhu
4
4
316
Application of Response Surface Methodology for Optimization of Potato Tuber Yield
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150404.20
The author investigates the operating conditions required for optimal potato tuber yield in Kenya, which will help potato farmers save on input costs in potato farming. The potato production process was optimized by applying a 2³ factorial design and response surface methodology. The combined effects of water, nitrogen and phosphorus mineral nutrients were investigated and optimized using response surface methodology. The optimum production conditions for potato tuber yield were found to be 70.04% irrigation water, 124.75 Kg/Ha of nitrogen supplied as urea and 191.04 Kg/Ha of phosphorus supplied as triple superphosphate. At the optimum conditions, a potato tuber yield of 19.36 Kg per plot of 1.8 meters by 2.25 meters can be reached. Increased productivity of potatoes can improve the livelihood of smallholder potato farmers in Kenya and save the farmers input costs. Finally, I hope that the approach applied in this study of potatoes can be useful for research on other commodities, leading to a better understanding of overall crop production.
doi:10.11648/j.ajtas.20150404.20
American Journal of Theoretical and Applied Statistics
2015-07-14
© Science Publishing Group
Dennis Kariuki Muriithi
4
4
304
Discrete Time Semi-Markov Model of a Two Non-Identical Unit Cold Standby System with Preventive Maintenance with Three Modes
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150404.18
This paper presents the reliability and availability measures of a two non-identical unit cold standby redundant system (unit-1 operating, unit-2 on cold standby) using a semi-Markov process under a discrete parametric Markov chain, i.e. the failure and repair times of a unit, the time to preventive maintenance (PM) and the PM time are taken as discrete random variables, with three different modes for each unit: normal (N) mode, partial failure (P) mode and total failure (F) mode. Unit-1 is sent for PM after working for a random period of time, assuming that the failure and repair times of a unit, the time to PM and the PM time follow geometric distributions with different parameters. A single repairman is available with the system for PM of unit-1 and repair of both units. The system is considered to be in an up-state if one or two units are operative or in partial failure (P) mode. After some basic definitions and notations, we obtain various measures of system effectiveness (reliability, availability, mean time to failure, busy period of the repairman due to PM of unit-1, busy period of the repairman due to repair of unit-1 and unit-2 from total failure, and the expected profit function) using the regenerative point technique. The mathematical problem thus developed is then solved numerically and represented graphically with the aid of a Maple program.
doi:10.11648/j.ajtas.20150404.18
American Journal of Theoretical and Applied Statistics
2015-06-30
© Science Publishing Group
Medhat Ahmed El-Damcese
Naglaa Hassan El-Sodany
4
4
290
The Log Normal and the Poisson Gravity Models in the Analysis of Interactions Phenomena
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150404.19
Three problems are often encountered when bilateral interaction data are analyzed by means of the log-normal gravity model: the bias created by the logarithmic transformation, the failure of the homoscedasticity assumption and the treatment of zero-valued flows. When the interactions are count data taking non-negative integer values, the literature suggests using a Poisson gravity model instead of the log-normal model to overcome these problems. In this paper, a comparative analysis of the two models is carried out using a real interaction phenomenon. The most important result is that, if the phenomenon is correctly specified, the two specifications of the gravity model behave very similarly.
doi:10.11648/j.ajtas.20150404.19
American Journal of Theoretical and Applied Statistics
2015-07-05
© Science Publishing Group
Giuseppe Ricciardo Lamonica
4
4
299
Study of Multivariate Data Clustering Based on K-Means and Independent Component Analysis
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150405.11
Over the last two decades, clustering has become a well-recognized area in the research field of data mining. Data clustering plays a major role in pattern recognition, signal processing, bioinformatics and artificial intelligence. Clustering is an unsupervised learning technique that forms groups of objects based on their similarity, in such a way that objects belonging to the same group are similar and those belonging to different groups are dissimilar. This paper analyzes three different techniques, K-means, principal component analysis (PCA) and independent component analysis (ICA), on real and simulated data. Recent developments include a rather unexpected application of the theory of ICA to data clustering, outlier detection and multivariate data visualization. Accurate identification of data clusters plays an important role in statistical analysis. In this paper we explore the connections among these three techniques for identifying multivariate data clusters and develop a new method, K-means on PCA or ICA; the results show that ICA-based clustering performs better than the others.
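The "K-means on PCA" combination described in this abstract can be sketched with numpy alone. The synthetic two-blob data and the deterministic farthest-point initialization below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def pca(X, k):
    """Project X onto its top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def kmeans(X, k, iters=100):
    """Lloyd's algorithm with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        # Add the point farthest from all chosen centers.
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

# Two well-separated synthetic blobs in 5 dimensions.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 5)), rng.normal(5, 0.5, (50, 5))])
labels = kmeans(pca(X, 2), k=2)  # cluster in the reduced 2-D space
```

Running K-means in the reduced space keeps the dominant structure while discarding low-variance directions, which is the motivation for combining the two techniques.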
doi:10.11648/j.ajtas.20150405.11
American Journal of Theoretical and Applied Statistics
2015-07-28
© Science Publishing Group
Md. Shamim Reza
Sabba Ruhi
4
5
321
Analysis of Determinants of Antenatal Care Services Utilization in Nairobi County Using Logistic Regression Model
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150405.12
Objectives: The aim of this study is to assess antenatal care service utilization and determine the factors associated with antenatal care non-attendance in Nairobi County. Methods: The study used data collected in the county by means of questionnaires, in which a total of 306 mothers participated. Data Analysis: The data were analyzed using R software version 3.0.2, and the results are presented in the form of tables. A logistic regression model was used to model some of the effects of the demographic and socio-economic independent variables. Results: The study found that the independent variables age, employment status, education level, parity and husband’s education level were the determinants of antenatal care service utilization in Nairobi County. The relationships between the covariates and antenatal care service utilization were significant at α = 0.05. Conclusions: The study suggests that mothers in Nairobi County should be educated on matters concerning antenatal health care utilization so as to increase the percentage of mothers who attend the health facilities.
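The logistic regression fit behind such results can be sketched briefly. The paper used R 3.0.2; this numpy version with a single synthetic covariate is only an illustration of the general technique, not the study's model:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=5000):
    """Logistic regression by gradient descent; an intercept column is added."""
    X = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)         # gradient of the log-loss
    return w

def predict(X, w):
    X = np.column_stack([np.ones(len(X)), X])
    return (1.0 / (1.0 + np.exp(-X @ w)) >= 0.5).astype(int)

# Hypothetical data: attendance probability rises with one standardized covariate.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
y = (x[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
w = fit_logistic(x, y)  # w[1] > 0 recovers the positive association
```

A positive fitted coefficient corresponds to a covariate (such as education level in the study) that increases the odds of attending antenatal care.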
doi:10.11648/j.ajtas.20150405.12
American Journal of Theoretical and Applied Statistics
2015-08-01
© Science Publishing Group
Kennedy Sakaya Barasa
Anthony Kibira Wanjoya
Anthony Gichuhi Waititu
4
5
328
Prediction Intervals for Progressive Type-II Right-Censored Order Statistics from Two Independent Sequences
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150405.13
This article discusses the problem of predicting future progressive Type-II right-censored order statistics based on progressive Type-II right-censored order statistics, record values and current records observed from the past X-sequence. The coverage probabilities of the prediction intervals are exact and do not depend on the sampling distribution F. Finally, real lifetime data on the breakdown of the insulating fluid between electrodes are used to illustrate the derived results.
doi:10.11648/j.ajtas.20150405.13
American Journal of Theoretical and Applied Statistics
2015-08-03
© Science Publishing Group
M. M. Mohie El-Din
M. S. Kotb
W. S. Emam
4
5
338
Stochastic Modeling of a System with Maintenance and Replacement of Standby Subject to Inspection
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150405.14
The present paper develops a probabilistic model of a cold standby system considering the failure of the unit in standby mode. Initially the model contains one unit in operation and another identical unit in cold standby mode. The unit in cold standby mode fails after the passage of a pre-specified time and goes under inspection for a feasibility check for maintenance or replacement, whereas the operative unit goes directly under repair at its failure. A single service facility available in the system handles the tasks of repair, inspection, maintenance and replacement. The replacement of the unit in standby mode, at its failure, takes some time, which follows a certain probability distribution. The theory of semi-Markov processes and the regenerative point technique are used to develop and analyze the system model. For illustration, the results are obtained for a particular case.
doi:10.11648/j.ajtas.20150405.14
American Journal of Theoretical and Applied Statistics
2015-08-05
© Science Publishing Group
R. K. Bhardwaj
Komaldeep Kaur
S. C. Malik
4
5
346
Spatial Modelling of Disparity in Economic Activity and Unemployment in Southern and Oromia Regional States of Ethiopia
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150405.15
Growth of productivity is a precondition for improving people's living standards and maintaining competitiveness in the globalized economy. However, wide regional differentials in the labor force imply inefficiency as a whole and might affect both aggregate unemployment and national output. The basic goal of this study was to model disparity in economic activity and unemployment in the Southern and Oromia Regional States of Ethiopia by incorporating spatial effects. Population and Housing Census data for 381 districts were used. Exploratory spatial data analysis, an OLS regression model, and spatial econometric models were employed. The exploratory spatial data analysis results revealed that both the economic activity and unemployment rates in a given district were directly affected by those of its neighbors. Economic activity and unemployment rates for males and females also depended spatially on those of neighboring districts. The spatial autocorrelation between unemployment and economic activity rates is negative. On the modeling side, relying on specification diagnostics and measures of fit, the spatial lag model was found to be the best model for both economic activity and unemployment rates. The modelling results revealed that both estimates of the spatial autoregressive parameters indicated the existence of spatial spillover in economic activity and unemployment rates. The spatial lag model analysis also demonstrated that the average number of persons per household, crude birth rate, and female and male unemployment rates were significant factors in economic activity rates. The percentage of urban population, economic inactivity rate, percentage of self-employed population, percentage of unpaid family employers, and average number of persons per household were found to be factors behind the disparities in unemployment rates across the regions' districts. In conclusion, as expected, the economic activity and unemployment variables were correlated over space.
It is recommended that an effective policy mix be put in place to stabilize and alleviate disparities in both economic activity and unemployment in the districts considered in the regions.
doi:10.11648/j.ajtas.20150405.15
American Journal of Theoretical and Applied Statistics
2015-08-20
© Science Publishing Group
Berisha Mekayhu Gelebo
Ayele Taye Goshu
4
5
358
Modeling the Impact of Crude Oil Price Shocks on Some Macroeconomic Variables in Nigeria Using Garch and VAR Models
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150405.16
This study investigated the impact of crude oil price shocks (COP) on the exchange rate (EXCHR), external reserves (EXRS), gross domestic product (GDP), inflation rate (INFL), international trade (INTR) and money supply (MSUP) in Nigeria, using quarterly data from 2000 to 2014 and GARCH and VAR models. From the analysis, all the variables were stationary at first difference with p-values less than 0.05. Heteroscedasticity was found in the exchange rate, with most of the model coefficients significant at the 5% level, and the forecasting model for the exchange rate is GARCH(2, 1). Crude oil shocks did not pose a significant inflationary threat to the Nigerian economy in the short run; rather, they improved the level of gross domestic product. However, external reserves and international trade were significantly affected by the recent fall in crude oil exports. Oil shocks also positively affected the money supply, showing that monetary policy responds to oil price changes; at the same time, the money supply did affect GDP. These results show that a diversified economy is really needed.
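The GARCH recursion underlying such a forecasting model can be illustrated by simulation. Under the common convention that GARCH(2, 1) means two ARCH lags and one GARCH lag, a minimal numpy sketch follows; the parameter values are illustrative assumptions, not the study's estimates:

```python
import numpy as np

def simulate_garch21(n, omega=0.1, alpha=(0.1, 0.05), beta=0.8, seed=42):
    """Simulate a GARCH(2,1) series:
    sigma2[t] = omega + alpha1*eps[t-1]^2 + alpha2*eps[t-2]^2 + beta1*sigma2[t-1].
    """
    rng = np.random.default_rng(seed)
    # Start every variance at the unconditional level omega / (1 - sum(alpha) - beta).
    sigma2 = np.full(n, omega / (1 - sum(alpha) - beta))
    eps = np.zeros(n)
    eps[:2] = rng.standard_normal(2) * np.sqrt(sigma2[:2])
    for t in range(2, n):
        sigma2[t] = (omega + alpha[0] * eps[t - 1] ** 2
                     + alpha[1] * eps[t - 2] ** 2 + beta * sigma2[t - 1])
        eps[t] = rng.standard_normal() * np.sqrt(sigma2[t])
    return eps, sigma2

eps, sigma2 = simulate_garch21(2000)
# Covariance stationarity requires alpha1 + alpha2 + beta1 < 1;
# with nonnegative coefficients all conditional variances stay positive.
```

The simulated series exhibits the volatility clustering that motivates fitting GARCH to exchange-rate data as in the study.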
doi:10.11648/j.ajtas.20150405.16
American Journal of Theoretical and Applied Statistics
2015-08-20
© Science Publishing Group
Audu Isah
Husseini Garba Dikko
Ejiemenu Sarah Chinyere
4
5
367
Compare and Evaluate the Performance of Gaussian Spatial Regression Models and Skew Gaussian Spatial Regression Based on Kernel Averaged Predictors
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150405.17
In many problems in the field of spatial statistics, when modeling the trend functions, predictors or covariates are available and the goal is to build a regression model to describe the relationship between the response and predictors. Generally, in spatial regression models, the trend function is often linear and it is assumed that the response mean is a linear function of predictor values in the same location where the response variable is observed. But, in real applications, the neighboring predictors sometimes provide valuable information about the response variable particulary when the distance between the locations is small. Having considered this subject matter, Heaton and Gelfand [6] suggested using kernel averaged predictors for modeling trend functions in which neighboring predictor information are also used. The models proposed by Heaton an Gelfand seemed to be bound by data normality. So, in many more application problems, spatial response variables follow a skew distribution. Therefore, in this article, skew Gaussian spatial regression model is studied and the performance of the model is presented and evaluated in comparison with Gaussian spatial regression models based on kernel averaged predictors using simulation studies and real examples
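The kernel-averaged-predictor idea summarized in this abstract can be sketched numerically. The fragment below is a hypothetical illustration on simulated data (not the authors' code): each site's predictor is replaced by a Gaussian-kernel-weighted average of predictor values over all sites, and the trend is then fitted by ordinary least squares on the averaged predictor.

```python
import numpy as np

def kernel_averaged_predictor(coords, x, bandwidth):
    """Replace each site's predictor with a Gaussian-kernel-weighted
    average of the predictor values at all sites (Heaton-Gelfand idea)."""
    # pairwise distances between sites
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)  # Gaussian kernel weights
    w /= w.sum(axis=1, keepdims=True)        # normalize each row
    return w @ x                             # kernel averaged predictor

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(50, 2))    # 50 spatial locations
x = rng.normal(size=50)                      # raw predictor
x_bar = kernel_averaged_predictor(coords, x, bandwidth=1.0)

# linear trend fit of the response on the averaged predictor
y = 2.0 + 1.5 * x_bar + rng.normal(scale=0.1, size=50)
X = np.column_stack([np.ones(50), x_bar])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The bandwidth controls how much neighboring information enters the trend; as it shrinks toward zero the model reduces to the usual same-location linear trend.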
Compare and Evaluate the Performance of Gaussian Spatial Regression Models and Skew Gaussian Spatial Regression Based on Kernel Averaged Predictors
doi:10.11648/j.ajtas.20150405.17
American Journal of Theoretical and Applied Statistics
2015-08-20
© Science Publishing Group
Somayeh Shahraki Dehsoukhteh
Compare and Evaluate the Performance of Gaussian Spatial Regression Models and Skew Gaussian Spatial Regression Based on Kernel Averaged Predictors
4
5
372
372
2015-08-20
2015-08-20
10.11648/j.ajtas.20150405.17
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150405.17
© Science Publishing Group
An Application of Geostatistics to Analysis of Water Quality Parameters in Rivers and Streams in Niger State, Nigeria
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150405.18
Assessments of surface water quality using multivariate statistical techniques do not incorporate the spatial locations of the data into their defining computations. Information on the spatial continuity of surface water concentrations can help to identify the magnitude of contamination by runoff and anthropogenic pollution. In the present study, the spatial behavior of five surface water quality parameters of some rivers and streams in Niger State, Nigeria was studied using the R geostatistical package <i>gstat</i>, in conjunction with the packages <i>sp, rgdal, spatstat and maptools</i>. Variograms and ordinary kriged spatial maps were generated for the rainy and dry seasons, and the characteristics of the best-fitting variogram models (range, sill and nugget effect) were obtained for each parameter. The variogram analysis indicated high spatial coherence for E.co, Mg and TDS, whereas TCo and TH showed low spatial coherence. The nugget-to-sill ratios of the experimental and linear fitted variogram models were less than 0.25 in all cases, indicating that the water levels of the rivers and streams have strong spatial coherence in both seasons; this result shows that the linear model is the best for both seasons. Kriged spatial variability maps revealed that the dry-season variograms, with an average range of 48 km, change more rapidly than the rainy-season variograms, with an average range of 4.3 km and R2 values of 0.80 to 0.92.
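The nugget-to-sill classification used in this abstract can be sketched as follows. This is a hypothetical Python illustration on simulated data, not the authors' R/gstat workflow: it computes the classical (Matheron) empirical variogram in distance bins, takes the first-bin semivariance as a crude nugget and the maximum as a crude sill, and applies the 0.25 threshold for strong spatial coherence.

```python
import numpy as np

def empirical_variogram(coords, z, n_bins=10):
    """Classical (Matheron) semivariance estimates in distance bins."""
    n = len(z)
    i, j = np.triu_indices(n, k=1)                  # all point pairs
    d = np.linalg.norm(coords[i] - coords[j], axis=1)
    sq = 0.5 * (z[i] - z[j]) ** 2                   # semivariance terms
    edges = np.linspace(0, d.max(), n_bins + 1)
    which = np.digitize(d, edges) - 1               # bin index per pair
    gamma = np.array([sq[which == b].mean() if np.any(which == b) else np.nan
                      for b in range(n_bins)])
    h = 0.5 * (edges[:-1] + edges[1:])              # bin midpoints
    return h, gamma

# simulated field with a smooth spatial trend plus small measurement noise
rng = np.random.default_rng(1)
coords = rng.uniform(0, 50, size=(200, 2))
z = np.sin(coords[:, 0] / 10) + 0.05 * rng.normal(size=200)

h, gamma = empirical_variogram(coords, z)

# crude nugget (first bin) and sill (maximum); ratio < 0.25 is the
# conventional cutoff for strong spatial coherence
valid = ~np.isnan(gamma)
nugget, sill = gamma[valid][0], np.nanmax(gamma)
ratio = nugget / sill
```

In practice a parametric model (linear, spherical, etc.) would be fitted to the empirical points, as the abstract does with gstat, before reading off the nugget and sill.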
An Application of Geostatistics to Analysis of Water Quality Parameters in Rivers and Streams in Niger State, Nigeria
doi:10.11648/j.ajtas.20150405.18
American Journal of Theoretical and Applied Statistics
2015-08-27
© Science Publishing Group
Isah Audu
Abdullahi Usman
An Application of Geostatistics to Analysis of Water Quality Parameters in Rivers and Streams in Niger State, Nigeria
4
5
388
388
2015-08-27
2015-08-27
10.11648/j.ajtas.20150405.18
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150405.18
© Science Publishing Group
Identifying Factors for Marriage Breakdown at Debre Birhan Town of Ethiopia: Logistic and Survival Analysis
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150405.19
Marriage breakdown is a condition in which the partners of a marital union cease to live together, especially due to divorce or separation. The main objective of this study is to identify factors for marriage breakdown. To achieve this, a sample of 576 respondents was taken in March 2012 using a stratified random sampling method. Descriptive statistics show that about 41.7% of first marriages in Debre Birhan town had broken down. A series of statistical analyses was carried out: factors for marriage breakdown were analyzed using binary logistic regression, and time to marriage breakdown was analyzed with a Cox proportional hazards model. The binary logistic regression shows that spouses who are infertile, who married at age 12-18 years (early marriage), who experience sexual incompatibility or unfaithfulness, whose marriage lacks discussion, or whose husbands are illiterate are exposed to a higher risk of marriage breakdown. The Cox proportional hazards model shows that infertility, female marriage between 12 and 18 years of age, a too low (<4 years) or too high (>10 years) age gap, differing religions, sexual incompatibility and unfaithfulness lead to a shorter survival time of the first marriage. Finally, we recommend that spouses develop a habit of discussion, especially on sexual issues, and that young people ensure they are ready for marriage and its responsibilities before entering the institution. Awareness creation and counseling services should be provided on the effects of early marriage, the importance of legal marriage, the impact of religious differences between spouses, and gender equality.
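The binary logistic regression step in this abstract reports risk factors as effects on the odds of breakdown. As a hypothetical illustration (simulated data, not the study's dataset), the sketch below fits a one-covariate logistic model by Newton-Raphson and converts the coefficient into an odds ratio, the quantity such studies typically interpret.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Logistic regression fitted by Newton-Raphson; returns coefficients."""
    X = np.column_stack([np.ones(len(y)), X])     # add an intercept
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))           # fitted probabilities
        W = p * (1 - p)                           # IRLS weights
        grad = X.T @ (y - p)                      # score vector
        hess = X.T @ (X * W[:, None])             # observed information
        beta += np.linalg.solve(hess, grad)       # Newton step
    return beta

# simulated data: one binary risk factor (e.g. early marriage) with a
# true log-odds-ratio of 1.0 for breakdown
rng = np.random.default_rng(2)
x = rng.integers(0, 2, size=2000).astype(float)
logit = -0.5 + 1.0 * x
y = (rng.uniform(size=2000) < 1 / (1 + np.exp(-logit))).astype(float)

beta = fit_logistic(x[:, None], y)
odds_ratio = np.exp(beta[1])   # multiplicative effect on the odds
```

An odds ratio above 1 marks the factor as raising the risk of breakdown, which is how the abstract's significant predictors would be read.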
Identifying Factors for Marriage Breakdown at Debre Birhan Town of Ethiopia: Logistic and Survival Analysis
doi:10.11648/j.ajtas.20150405.19
American Journal of Theoretical and Applied Statistics
2015-09-07
© Science Publishing Group
Getahun Mulugeta
Identifying Factors for Marriage Breakdown at Debre Birhan Town of Ethiopia: Logistic and Survival Analysis
4
5
395
395
2015-09-07
2015-09-07
10.11648/j.ajtas.20150405.19
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150405.19
© Science Publishing Group
Estimation of Population Total Using Spline Functions
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150405.20
This study sought to estimate a finite population total using spline functions. The patterns emerging from the spline smoother were compared with those obtained from model-based, model-assisted and non-parametric estimators. To measure the performance of each estimator, three aspects were considered: the average bias, the efficiency (via the average mean square error) and the robustness (via the rate of change of efficiency). We used six populations: four natural and two simulated. The findings showed that the model-based estimator works very well in terms of efficiency, while the model-assisted estimator is almost unbiased when the model is linear and homoscedastic. However, these estimators break down when the underlying model assumptions are violated. The kernel estimator (Nadaraya-Watson) was found to be the most robust of the five estimators considered. Of the two spline functions considered, the periodic spline performed better. The spline functions provided good results whether or not the design points were uniformly spaced. We also found that, under certain conditions, a smoothing spline estimator and a kernel estimator are equivalent. The study recommends that both the ratio estimator and the local polynomial estimator be used within the confines of a linear homoscedastic model. The Nadaraya-Watson and periodic spline estimators, both non-parametric, are highly robust, with the Nadaraya-Watson even more robust than the periodic spline.
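The model-based route to a population total that this abstract compares against can be sketched with the Nadaraya-Watson estimator it singles out as most robust. This is a hypothetical illustration on simulated data, not the study's populations: non-sampled units' responses are predicted from an auxiliary variable by kernel regression, and the total is the sum of observed plus predicted values.

```python
import numpy as np

def nadaraya_watson(x_eval, x_samp, y_samp, bandwidth):
    """Nadaraya-Watson kernel regression estimates at the points x_eval."""
    u = (x_eval[:, None] - x_samp[None, :]) / bandwidth
    w = np.exp(-0.5 * u ** 2)                 # Gaussian kernel weights
    return (w @ y_samp) / w.sum(axis=1)       # locally weighted mean

# finite population with a smooth, nonlinear mean function
rng = np.random.default_rng(3)
N, n = 1000, 200
x_pop = rng.uniform(0, 1, N)                  # auxiliary variable, known for all units
y_pop = np.sin(2 * np.pi * x_pop) + 2 + 0.1 * rng.normal(size=N)

sample = rng.choice(N, size=n, replace=False) # simple random sample of units
x_s, y_s = x_pop[sample], y_pop[sample]

# model-based total: observed y's plus predictions for non-sampled units
mask = np.ones(N, dtype=bool)
mask[sample] = False
y_hat = nadaraya_watson(x_pop[mask], x_s, y_s, bandwidth=0.05)
total_est = y_s.sum() + y_hat.sum()
true_total = y_pop.sum()
```

A spline smoother would slot into the same template by replacing the `nadaraya_watson` predictions, which is the comparison the study carries out.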
Estimation of Population Total Using Spline Functions
doi:10.11648/j.ajtas.20150405.20
American Journal of Theoretical and Applied Statistics
2015-09-09
© Science Publishing Group
Gladys Gakenia Njoroge
Estimation of Population Total Using Spline Functions
4
5
403
403
2015-09-09
2015-09-09
10.11648/j.ajtas.20150405.20
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150405.20
© Science Publishing Group
Determination of Infant and Child Mortality in Kenya Using Cox-Proportional Hazard Model
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150405.21
One of the Millennium Development Goals is the reduction of infant and child mortality by two-thirds by the year 2015. To achieve this goal, efforts need to be concentrated on identifying cost-effective strategies, as many international agencies have advocated directing more resources to the health sector. One way of doing this is to identify the important factors that affect infant and child mortality. This study is necessary because infant and child mortality is one of the most sensitive indicators of the socioeconomic and health status of a community: more than any other age group of a population, infants and children depend for their survival on the socioeconomic conditions of their environment. This study addresses factors affecting infant and child mortality in Kenya; its main objective is to determine the effect of socioeconomic and demographic variables on infant and child mortality. Childhood mortality in the KDHS 2008-09 data was analyzed in two age periods: mortality from birth to the age of 12 months, referred to as “infant mortality”, and mortality from the age of 12 months to the age of 60 months, referred to as “child mortality”. The data from the Kenya Demographic and Health Survey (KDHS 2008-09) were collected by questionnaire under a two-stage cluster sampling design. Cox regression survival analysis was used to compute the relative risks of the socioeconomic and demographic variables for infant and child mortality. The study revealed that socioeconomic and demographic factors affect both infant and child mortality, that the relative risks were higher for infant mortality than for child mortality, and that the place of birth has the greatest impact on infant mortality.
The study recommends that policy makers and programme managers in the child health sector formulate appropriate strategies to improve the situation of children under five years in Kenya by creating awareness of these factors and improving on them.
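The relative risks this abstract reports come from maximizing the Cox partial likelihood. As a hypothetical sketch (simulated survival times, not the KDHS data), the fragment below fits a one-covariate Cox model by Newton-Raphson, using reversed cumulative sums for the risk-set totals, and reads off the hazard ratio.

```python
import numpy as np

def cox_fit(time, event, x, iters=30):
    """Newton-Raphson for a one-covariate Cox proportional hazards model."""
    order = np.argsort(time)                       # sort by event time
    time, event, x = time[order], event[order], x[order]
    beta = 0.0
    for _ in range(iters):
        r = np.exp(beta * x)                       # relative risks
        # reversed cumulative sums give risk-set totals at each time:
        # the risk set at time t_i is every subject with time >= t_i
        s0 = np.cumsum(r[::-1])[::-1]
        s1 = np.cumsum((r * x)[::-1])[::-1]
        s2 = np.cumsum((r * x * x)[::-1])[::-1]
        U = np.sum(event * (x - s1 / s0))              # score
        I = np.sum(event * (s2 / s0 - (s1 / s0) ** 2)) # information
        beta += U / I                                  # Newton step
    return beta

# simulated data: the exposed group (x = 1) has roughly double the hazard
rng = np.random.default_rng(4)
n = 1000
x = rng.integers(0, 2, n).astype(float)
time = rng.exponential(1 / np.exp(0.7 * x))   # true log-hazard-ratio 0.7
event = np.ones(n)                            # no censoring, for simplicity
beta = cox_fit(time, event, x)
hazard_ratio = np.exp(beta)                   # relative risk of exposure
```

In the study's setting each socioeconomic or demographic variable would contribute such a coefficient, with hazard ratios above 1 flagging higher mortality risk.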
Determination of Infant and Child Mortality in Kenya Using Cox-Proportional Hazard Model
doi:10.11648/j.ajtas.20150405.21
American Journal of Theoretical and Applied Statistics
2015-09-11
© Science Publishing Group
Daniel Mwangi Muriithi
Dennis K. Muriithi
Determination of Infant and Child Mortality in Kenya Using Cox-Proportional Hazard Model
4
5
413
413
2015-09-11
2015-09-11
10.11648/j.ajtas.20150405.21
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=146&doi=10.11648/j.ajtas.20150405.21
© Science Publishing Group