Science Publishing Group: Science Journal of Applied Mathematics and Statistics: Table of Contents
<i> Science Journal of Applied Mathematics and Statistics (SJAMS) </i> is a peer-reviewed journal covering all areas of applied mathematics and statistics that aim to solve problems in engineering, the sciences, and business through mathematical, computational and statistical methods. Topics of interest include, but are not limited to: applied mechanics, approximation theory, computational simulation, control, differential equations, dynamics, inverse problems, modeling, numerical analysis, optimization, probabilistic and statistical methods, and stochastic processes. Research papers may be in any area of applied mathematics and statistics, with special emphasis on new mathematical ideas relevant to modeling and analysis in modern science and technology, and on the development of interesting mathematical methods of wide applicability.
http://www.sciencepublishinggroup.com/j/sjams
Science Publishing Group
en-US
Science Journal of Applied Mathematics and Statistics
Science Journal of Applied Mathematics and Statistics
http://image.sciencepublishinggroup.com/journal/149.gif
http://www.sciencepublishinggroup.com/j/sjams
Non-local Boundary Condition Steklov Problem for A Laplace Equation in Bounded Domain
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130101.11
In the classical mathematical physics course, the Dirichlet (first kind), Neumann (second kind) and, as a special case, the Poincaré (third kind) boundary value problems were considered under local boundary conditions for the Laplace equation, the canonical form of elliptic equations. Later, the Steklov problem for the Laplace equation under a non-local boundary condition was investigated in [3], and a sufficient condition for the Fredholm property was found. Note that here the boundary conditions contain non-local and global terms, and the method of investigation consists of obtaining necessary conditions, regularizing them, and reducing the stated boundary value problem to a system of Fredholm integral equations of the second kind with non-singular kernels.
Non-local Boundary Condition Steklov Problem for A Laplace Equation in Bounded Domain
doi:10.11648/j.sjams.20130101.11
Science Journal of Applied Mathematics and Statistics
2014-01-01
© Science Publishing Group
Aliev Nehan Ali
Abbasova Aygun Khanlar
Zeynalov Ramin M.
Non-local Boundary Condition Steklov Problem for A Laplace Equation in Bounded Domain
1
1
6
6
2014-01-01
2014-01-01
10.11648/j.sjams.20130101.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130101.11
© Science Publishing Group
Use of Multistage Optimisation Technique in Formulation of Strategies to Reduce Customer Churn Problem Facing Internet Operators in Zimbabwe
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130102.11
The Internet is now considered indispensable in Zimbabwe's economic and social structures, as it is worldwide. Jostling among operators to provide internet services has intensified competition in Zimbabwe, leading to customer churn. The problem has troubled many telecoms companies. Customer churn refers to the propensity of customers to cease doing business with a company in a given time period. Companies are struggling to keep their customers from defecting. What then needs to be done? Formulating a robust set of strategies may be the solution. This paper proposes a model for strategy formulation, in order to establish strategies that reduce customer churn. SWOT analysis, the Analytic Hierarchy Process (AHP) and a linear programming model were used to find the optimal strategy to be implemented by the case study company so as to reduce customer churn. The paper not only demonstrates strategy formulation using the model, but also highlights the need for quantitative analysis in strategy formulation.
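The AHP stage of such a multistage model converts pairwise comparisons of candidate strategies into priority weights. As a rough sketch (the 3×3 comparison matrix below is invented for illustration, not taken from the paper), the weights and the standard consistency check can be computed as follows:

```python
import math

# Hypothetical pairwise comparison matrix for three candidate
# churn-reduction strategies (values are illustrative only).
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
n = len(A)

# AHP priority weights via the geometric-mean method.
gm = [math.prod(row) ** (1 / n) for row in A]
weights = [g / sum(gm) for g in gm]

# Consistency ratio: lambda_max is estimated from A*w; RI = 0.58 is
# the standard random index for n = 3, and CR < 0.1 is acceptable.
Aw = [sum(A[i][j] * weights[j] for j in range(n)) for i in range(n)]
lam_max = sum(Aw[i] / weights[i] for i in range(n)) / n
cr = ((lam_max - n) / (n - 1)) / 0.58
```

The highest-weight strategy would then feed into the linear programming stage, where resource constraints decide what can actually be implemented.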
Use of Multistage Optimisation Technique in Formulation of Strategies to Reduce Customer Churn Problem Facing Internet Operators in Zimbabwe
doi:10.11648/j.sjams.20130102.11
Science Journal of Applied Mathematics and Statistics
2014-01-01
© Science Publishing Group
Ndava Constantine Mupondo
Brain Kusotera
Desmond Mwembe
Susan Maposa
Use of Multistage Optimisation Technique in Formulation of Strategies to Reduce Customer Churn Problem Facing Internet Operators in Zimbabwe
1
2
24
24
2014-01-01
2014-01-01
10.11648/j.sjams.20130102.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130102.11
© Science Publishing Group
Uniform Asymptotic Solutions of the Cauchy Problem for a Generalized Model Equation of L.S.Pontryagin in the Case of Violation of Conditions of Asymptotic Stability
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130103.11
Here, we construct a uniform asymptotic solution of the Cauchy problem for an inhomogeneous differential equation with a small parameter multiplying the derivative, in the case where the linear part of the equation is purely complex, with its real part changing from negative to positive as one passes from the left half-plane to the right half-plane.
Uniform Asymptotic Solutions of the Cauchy Problem for a Generalized Model Equation of L.S.Pontryagin in the Case of Violation of Conditions of Asymptotic Stability
doi:10.11648/j.sjams.20130103.11
Science Journal of Applied Mathematics and Statistics
2014-01-01
© Science Publishing Group
Dilmurat Tursunov
Uniform Asymptotic Solutions of the Cauchy Problem for a Generalized Model Equation of L.S.Pontryagin in the Case of Violation of Conditions of Asymptotic Stability
1
3
29
29
2014-01-01
2014-01-01
10.11648/j.sjams.20130103.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130103.11
© Science Publishing Group
Impact and Treatment of the Evaluators’ Effect on Employees’ Performance Appraisal
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130104.11
Putting a performance appraisal scheme in place is important for assessing the gap between the best- and least-performing employees. Employees who improve their work efficiency can then be rewarded, whereas corrective action can be taken against those who do not. The objective of this study is to construct a technique for evaluating the subjective effect that a given evaluator’s assessment has on the performance appraisal of a given employee, assuming that the assessment of one’s work performance must be undertaken by an evaluator and that this assessment is essentially subjective. A linear mixed modeling approach is applied to show a significant evaluator effect on certain employees that needs to be properly accounted for when rewarding them. Without this adjustment, any incentive scheme, whether reward-based or penalty-based, may fail in its intended purpose of improving employees’ overall performance.
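The paper's adjustment uses a linear mixed model; as a much cruder stand-in (not the authors' method), the idea can be seen by centring each evaluator's scores before ranking employees. All names and scores below are hypothetical:

```python
import statistics

# Hypothetical raw appraisal scores grouped by evaluator; evaluator_B
# is a harsher rater, which drags the raw ranking of their employees
# down. A crude adjustment: centre each evaluator's scores, then add
# back the overall mean.
scores = {
    "evaluator_A": {"emp1": 82, "emp2": 78, "emp3": 90},
    "evaluator_B": {"emp4": 60, "emp5": 55, "emp6": 68},
}

overall = statistics.fmean(
    s for emps in scores.values() for s in emps.values()
)

adjusted = {}
for ev, emps in scores.items():
    ev_mean = statistics.fmean(emps.values())
    for emp, s in emps.items():
        adjusted[emp] = s - ev_mean + overall
```

With these invented numbers, emp6 has a lower raw score than emp1 or emp3 but comes out on top after the evaluator effect is removed, which is exactly the phenomenon a mixed model would quantify formally.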
Impact and Treatment of the Evaluators’ Effect on Employees’ Performance Appraisal
doi:10.11648/j.sjams.20130104.11
Science Journal of Applied Mathematics and Statistics
2014-01-01
© Science Publishing Group
Awoke Seyoum Tegegne
Impact and Treatment of the Evaluators’ Effect on Employees’ Performance Appraisal
1
4
37
37
2014-01-01
2014-01-01
10.11648/j.sjams.20130104.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130104.11
© Science Publishing Group
A New Method for the Model Selection in B-Spline Surface Approximation with an Influence Function
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130105.11
In model selection, the most effective methods require the most time. The analysis of a bivariate B-spline model with a penalty term involves many difficulties: it has many factors and parameters, such as the number of knots, the locations of those knots, the number of B-spline functions and the value of the smoothing parameter of the penalty term. To determine the model, a large number of combinations of those parameters must be compared. Various information criteria are considered; the cross-validation (CV) criterion is excellent, but it incurs a large computational cost. The effect of the influence function and the techniques of generalized cross-validation (GCV) are considered. The influence function is related to the first term of a Taylor expansion. Some alternative methods are tested and a new method is proposed. The method is verified by a theoretical proof and by computational results.
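The cost concern with cross-validation is exactly what influence-function and hat-matrix arguments address: for linear smoothers, leave-one-out CV can be computed from a single fit. A minimal sketch with simple linear regression (toy data, not the paper's bivariate B-spline setting), checking the shortcut against brute-force refitting:

```python
import statistics

# Toy data (illustrative).
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 1.9, 3.2, 3.8, 5.1, 5.9]
n = len(x)

def fit(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    xbar, ybar = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((xi - xbar) ** 2 for xi in xs)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(xs, ys)) / sxx
    return ybar - b * xbar, b

a, b = fit(x, y)
xbar = statistics.fmean(x)
sxx = sum((xi - xbar) ** 2 for xi in x)

# Leverages (hat-matrix diagonal) for simple linear regression.
h = [1 / n + (xi - xbar) ** 2 / sxx for xi in x]

# Leave-one-out CV via the PRESS shortcut: one fit, no refitting.
press = sum(((yi - (a + b * xi)) / (1 - hi)) ** 2
            for xi, yi, hi in zip(x, y, h)) / n

# Brute-force leave-one-out CV for comparison: n refits.
brute = 0.0
for i in range(n):
    ai, bi = fit(x[:i] + x[i+1:], y[:i] + y[i+1:])
    brute += (y[i] - (ai + bi * x[i])) ** 2
brute /= n
```

The two quantities agree exactly; the saving grows with the number of candidate models, which is the point of the paper's approach.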
A New Method for the Model Selection in B-Spline Surface Approximation with an Influence Function
doi:10.11648/j.sjams.20130105.11
Science Journal of Applied Mathematics and Statistics
2014-01-01
© Science Publishing Group
Hongmei Bao
Kaoru Fueda
A New Method for the Model Selection in B-Spline Surface Approximation with an Influence Function
1
5
46
46
2014-01-01
2014-01-01
10.11648/j.sjams.20130105.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130105.11
© Science Publishing Group
Symmetry Analysis to f'''+βff''-αf'2=0 Arising in Boundary Layer Theory
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130105.12
In this paper we analyze the boundary layer equation f''' + βff'' − α(f')² = 0 using a group-theoretical technique known as the symmetry method. We obtain the symmetry group admitted by the boundary layer equation. We then construct exact invariant solutions and outline a symmetry reduction. The invariant solution is examined under common boundary conditions.
Symmetry Analysis to f'''+βff''-αf'2=0 Arising in Boundary Layer Theory
doi:10.11648/j.sjams.20130105.12
Science Journal of Applied Mathematics and Statistics
2014-01-01
© Science Publishing Group
Salma Mohammad Al-Tuwairqi
Anisa Mukhtar Hassan
Symmetry Analysis to f'''+βff''-αf'2=0 Arising in Boundary Layer Theory
1
5
49
49
2014-01-01
2014-01-01
10.11648/j.sjams.20130105.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130105.12
© Science Publishing Group
Moments of Continuous Bi-Variate Distributions: An Alternative Approach
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130105.15
We propose a method of obtaining the moments of some continuous bivariate distributions with parameters α1, β1 and α2, β2 by finding the nth moment of the variable X^c Y^d (c ≥ 0, d ≥ 0), where X and Y are continuous random variables with joint pdf f(x, y). Here we find the quantity gn(c, d), defined as gn(c, d) = E(X^c Y^d + λ)^n: the nth moment, about the constant λ, of the cth power of X and the dth power of Y. These moments are obtained by the use of bivariate moment generating functions, when they exist. The proposed gn(c, d) is illustrated with some continuous bivariate distributions and is shown to be easy to use even when the powers of the random variables are non-negative real numbers that need not be integers. The results obtained using gn(c, d) agree with those obtained by other methods, such as moment generating functions, when these exist.
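For intuition, gn(c, d) can be checked numerically in a simple special case. Taking X and Y independent Uniform(0, 1), so that E[X^m] = 1/(m+1) (a choice made here for illustration, not one from the paper), a binomial expansion of (X^c Y^d + λ)^n gives gn(c, d) in closed form, which we verify against direct numerical integration:

```python
import math

# Illustrative parameters (arbitrary choices); n must be an integer
# for the binomial expansion, c and d need not be.
c, d, lam, n = 2.0, 1.0, 0.5, 3

# g_n(c, d) = E[(X^c Y^d + lam)^n]
#           = sum_k C(n,k) lam^(n-k) E[X^(ck)] E[Y^(dk)]
# with E[X^m] = 1/(m+1) for Uniform(0,1).
g = sum(
    math.comb(n, k) * lam ** (n - k) / ((c * k + 1) * (d * k + 1))
    for k in range(n + 1)
)

# Sanity check by midpoint-rule double integration over the unit square.
m = 400
step = 1.0 / m
num = sum(
    ((((i + 0.5) * step) ** c) * (((j + 0.5) * step) ** d) + lam) ** n
    for i in range(m)
    for j in range(m)
) * step * step
```

The closed-form sum and the quadrature agree to the accuracy of the grid, illustrating how gn(c, d) packages all such mixed moments at once.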
Moments of Continuous Bi-Variate Distributions: An Alternative Approach
doi:10.11648/j.sjams.20130105.15
Science Journal of Applied Mathematics and Statistics
2014-01-01
© Science Publishing Group
Oyeka ICA
Okeh UM
Moments of Continuous Bi-Variate Distributions: An Alternative Approach
1
5
69
69
2014-01-01
2014-01-01
10.11648/j.sjams.20130105.15
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130105.15
© Science Publishing Group
Research on the Construction of Supply Chain Network Based on Fixed Spread Risk
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130105.13
A supply chain is an integrated network in which suppliers, sellers and customers are highly interconnected through business activities. Its basic characteristics are complexity, dynamics and growth. In this paper, based on complex network theory, we build a supply chain network model based on gradual risk. The dynamic growth mechanism of the supply chain network is based on the local-world evolving model: a new node joins a local world with a certain probability. In the supply chain setting, the node, the edge, the weight and the network are redefined. The process by which a newly added node assigns risk to existing nodes is also introduced.
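The local-world growth mechanism can be sketched as follows; the parameter values, the seed network and the omission of the risk-assignment step are all illustrative assumptions, not the paper's specification:

```python
import random

def local_world_growth(n_final, m0=3, m=2, local_size=3, seed=42):
    """Grow a network in the spirit of the local-world evolving model:
    each new node samples a 'local world' of existing nodes and
    attaches m edges within it, preferentially by degree."""
    rng = random.Random(seed)
    edges = set()
    degree = {i: 0 for i in range(m0)}
    for i in range(m0):                 # fully connected seed network
        for j in range(i + 1, m0):
            edges.add((i, j))
            degree[i] += 1
            degree[j] += 1
    for new in range(m0, n_final):
        # The new node's local world: a random subset of existing nodes.
        local = rng.sample(sorted(degree), local_size)
        targets = set()
        while len(targets) < m:         # degree-preferential attachment
            v = rng.choices(local, weights=[degree[u] for u in local])[0]
            targets.add(v)
        degree[new] = 0
        for v in targets:
            edges.add((v, new))
            degree[new] += 1
            degree[v] += 1
    return degree, edges
```

A supply-chain version would label nodes as suppliers, sellers or customers, weight the edges, and propagate risk from each newly attached node along its new edges.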
Research on the Construction of Supply Chain Network Based on Fixed Spread Risk
doi:10.11648/j.sjams.20130105.13
Science Journal of Applied Mathematics and Statistics
2014-01-01
© Science Publishing Group
Lei Wen
Yachao Shi
Mingfang Guo
Research on the Construction of Supply Chain Network Based on Fixed Spread Risk
1
5
53
53
2014-01-01
2014-01-01
10.11648/j.sjams.20130105.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130105.13
© Science Publishing Group
Prediction of Academic Manpower System of a Polytechnic Institution in Nigeria
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130105.14
This study applies a Markovian approach to studying the behaviour of academic staff grade transitions at a polytechnic institution in Nigeria. The objective is to determine the proportions of staff recruited into, promoted within and withdrawn from the various grade levels of the institution over the years, and also to forecast the expected manpower structure of the institution for the 2014/2015 session. Secondary data obtained from the Personnel Department of Delta State Polytechnic, Oghara, for the 2006/07 – 2011/12 sessions were used to evaluate the method. The analysis showed that the academic staff grade transition flow is stationary over the observed time period. It was also predicted that, at the beginning of the 2014/15 session, the expected staff structure of Delta State Polytechnic, Otefe-Oghara will consist of 2 Assistant Lecturers, 15 Lecturers III, 11 Lecturers II, 12 Lecturers I, 9 Senior Lecturers, 7 Principal Lecturers and 7 Chief Lecturers, if the current recruitment and promotion policies in the institution remain unchanged. The predicted staff distribution for the 2014/15 session was observed to be approximately normally distributed.
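The forecasting step in such a model is a one-step Markov projection of current stocks through the estimated transition matrix, plus expected recruitment. The three-grade matrix and counts below are invented for illustration (the paper uses seven grades and proportions estimated from the Oghara data):

```python
# Hypothetical grade-transition matrix: rows = current grade,
# columns = next-session grade; row deficits (1 - row sum) represent
# withdrawals from the system. Numbers are illustrative only.
P = [
    [0.70, 0.20, 0.00],   # Lecturer II: stay / promoted / (skip)
    [0.00, 0.75, 0.15],   # Lecturer I
    [0.00, 0.00, 0.85],   # Senior Lecturer
]
stock = [20, 15, 10]      # current staff per grade
recruits = [4, 1, 0]      # expected new hires per grade

# One-step forecast: expected stock in each grade next session.
n = len(stock)
forecast = [
    sum(stock[i] * P[i][j] for i in range(n)) + recruits[j]
    for j in range(n)
]
```

Iterating the same projection gives multi-session forecasts, which is how a 2014/15 structure would be obtained from 2011/12 stocks.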
Prediction of Academic Manpower System of a Polytechnic Institution in Nigeria
doi:10.11648/j.sjams.20130105.14
Science Journal of Applied Mathematics and Statistics
2014-01-01
© Science Publishing Group
Ogbogbo, G. O.
Ebuh, G. U.
Aronu, C. O.
Prediction of Academic Manpower System of a Polytechnic Institution in Nigeria
1
5
61
61
2014-01-01
2014-01-01
10.11648/j.sjams.20130105.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130105.14
© Science Publishing Group
Non-Parametric Study of the Attitude of Civil Servants towards Made-in-Nigeria Goods (A Case Study of Owerri Urban, Imo State, Nigeria)
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130105.16
This study analyzes the attitude of civil servants towards made-in-Nigeria goods in Owerri Urban, Imo State. It attempts to provide a general picture of the level of acceptability of these goods and of ways of raising their quality standards so as to increase demand. Analysis using the Chi-square test showed that civil servants' opinions on the prices of made-in-Nigeria goods are independent of sex, educational attainment, level of income and age. Analysis using the non-parametric Friedman test revealed that at least one factor is responsible for the low quality and poor durability of made-in-Nigeria goods. Analysis using the non-parametric Kruskal–Wallis test concluded that the suggested measures are ways of raising the quality standards of made-in-Nigeria goods so as to increase demand.
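The independence claims rest on the chi-square test for contingency tables. A minimal sketch with a hypothetical 2×2 table (opinion on price vs. sex; the counts are invented, not survey data from the study):

```python
# Hypothetical contingency table: rows = sex, columns = opinion
# (favourable / unfavourable) on prices of made-in-Nigeria goods.
table = [
    [30, 20],   # male
    [25, 25],   # female
]
row = [sum(r) for r in table]
col = [sum(t[j] for t in table) for j in range(2)]
total = sum(row)

# Chi-square statistic: sum of (observed - expected)^2 / expected,
# with expected counts row_total * col_total / grand_total.
chi2 = sum(
    (table[i][j] - row[i] * col[j] / total) ** 2
    / (row[i] * col[j] / total)
    for i in range(2)
    for j in range(2)
)

df = (2 - 1) * (2 - 1)
critical_5pct = 3.841          # chi-square table value for df = 1
independent = chi2 < critical_5pct
```

Here the statistic falls below the 5% critical value, so the null hypothesis of independence is not rejected, matching the kind of conclusion reported in the abstract.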
Non-Parametric Study of the Attitude of Civil Servants towards Made-in-Nigeria Goods (A Case Study of Owerri Urban, Imo State, Nigeria)
doi:10.11648/j.sjams.20130105.16
Science Journal of Applied Mathematics and Statistics
2014-01-01
© Science Publishing Group
Iheagwara Andrew Ihuoma
Non-Parametric Study of the Attitude of Civil Servants towards Made-in-Nigeria Goods (A Case Study of Owerri Urban, Imo State, Nigeria)
1
5
81
81
2014-01-01
2014-01-01
10.11648/j.sjams.20130105.16
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130105.16
© Science Publishing Group
Longitudinal Studies of Random Effect Model on Academic Performance of Undergraduate Students
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130105.17
This paper discusses a longitudinal study using a random-effects model of the academic performance of undergraduate students, with the Federal University of Technology, Owerri, Imo State, Nigeria as a case study. Secondary data were used for the research, and the SAS software package was used for the analysis. There appears to be some curvature in the average trend and in the individual profile plots, and hence a quadratic time effect was fitted to the data. The individual profiles make up the total observations collected for the analysis. From the profiles by SSA, entry age, entry aggregate and gender, it can be assumed that each profile's evolution follows a quadratic trend. It can also be concluded that most students who started with a low GPA in semester one improved their performance up to semester three, with a downward trend before semester seven. Further, the mean profile by SSA was explored. From the model chosen among all those fitted to the data set, we conclude that students' GPAs depend on SSA, entry age, entry aggregate and gender. Students with high and medium admission aggregates score high GPAs and students with low admission aggregates score low GPAs in semester one, but on average students with low and medium entry aggregates score higher GPAs than students with high entry aggregates. The performance of GSS students is better than that of PSS students in semester one and on average. In all the models, students' GPAs increase from semester one to semester three and decrease after semester three; generally, students tend to perform best in the third semester. The analysis also revealed that academic performance depends on SSA, entry age, entry aggregate and gender.
Longitudinal Studies of Random Effect Model on Academic Performance of Undergraduate Students
doi:10.11648/j.sjams.20130105.17
Science Journal of Applied Mathematics and Statistics
2014-01-01
© Science Publishing Group
Chukwudi Justine Ogbonna
Opara Jude
Longitudinal Studies of Random Effect Model on Academic Performance of Undergraduate Students
1
5
97
97
2014-01-01
2014-01-01
10.11648/j.sjams.20130105.17
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20130105.17
© Science Publishing Group
Robust Covariance Estimator for Small-Sample Adjustment in the Generalized Estimating Equations: A Simulation Study
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140201.13
The robust or sandwich estimator is commonly used to estimate the covariance matrix of the estimated regression parameters in the generalized estimating equations (GEE) method for analyzing longitudinal data. However, the robust estimator can underestimate the variance when the sample size is small. We propose an alternative covariance estimator to improve the small-sample bias of the robust estimator in the GEE method. Our proposed estimator is a modification of the bias-corrected covariance estimator proposed by Pan (2001, Biometrika 88, 901–906) for the GEE method. In a simulation study, we compared the proposed covariance estimator with the robust estimator and Pan's estimator for continuous and binomial longitudinal responses involving 10–50 subjects. The test size of Wald-type test statistics for the proposed estimator is relatively close to the nominal level compared with those for the robust estimator and Pan's approach.
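The small-sample behaviour of sandwich estimators is easiest to see in the simplest case. Below is a toy heteroskedasticity-robust (HC0) variance for an OLS slope, together with a degrees-of-freedom correction (HC1); this is an analogue for intuition only, not the GEE bias correction of Pan or of the authors:

```python
import statistics

# Toy data (illustrative).
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1.2, 2.1, 2.8, 4.4, 4.9, 6.3, 6.8, 8.5]
n = len(x)

# OLS fit of y = a + b*x.
xbar, ybar = statistics.fmean(x), statistics.fmean(y)
sxx = sum((xi - xbar) ** 2 for xi in x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
a = ybar - b * xbar
resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Sandwich (HC0) variance of the slope: consistent, but known to be
# biased downward in small samples.
hc0 = sum(((xi - xbar) * ei) ** 2 for xi, ei in zip(x, resid)) / sxx ** 2

# A simple degrees-of-freedom correction (HC1) inflates it, in the
# same spirit as bias-corrected sandwich estimators for GEE.
hc1 = hc0 * n / (n - 2)
```

With clustered longitudinal data the residual products are summed within subjects rather than per observation, but the underestimate-then-correct pattern is the same.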
Robust Covariance Estimator for Small-Sample Adjustment in the Generalized Estimating Equations: A Simulation Study
doi:10.11648/j.sjams.20140201.13
Science Journal of Applied Mathematics and Statistics
2014-01-01
© Science Publishing Group
Masahiko Gosho
Yasunori Sato
Hisao Takeuchi
Robust Covariance Estimator for Small-Sample Adjustment in the Generalized Estimating Equations: A Simulation Study
2
1
25
25
2014-01-01
2014-01-01
10.11648/j.sjams.20140201.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140201.13
© Science Publishing Group
Modelling and Forecasting Malaria Mortality Rate using SARIMA Models (A Case Study of Aboh Mbaise General Hospital, Imo State Nigeria)
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140201.15
This paper examined the modelling and forecasting of malaria mortality rates using SARIMA models. Among the most effective approaches for analysing time series data is the method propounded by Box and Jenkins, the Autoregressive Integrated Moving Average (ARIMA). In this paper, we employed the Box-Jenkins methodology to build an ARIMA model for the malaria mortality rate for the period January 1996 to December 2013, a total of 216 data points. The model obtained was used to forecast the monthly malaria mortality rate for the upcoming year 2014. The forecasted results will help the government and medical professionals to maintain a steady decrease in malaria mortality in order to combat the predicted rise in the mortality rate envisaged in some months.
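The Box-Jenkins SARIMA workflow begins by removing trend and seasonality through differencing. The differencing step can be sketched as follows, on a synthetic monthly series rather than the paper's mortality data:

```python
import numpy as np

def sarima_difference(x, d=1, D=1, s=12):
    """Apply (1-B)^d (1-B^s)^D to a series x: the differencing step of a
    SARIMA(p,d,q)(P,D,Q)_s model in the Box-Jenkins methodology."""
    x = np.asarray(x, dtype=float)
    for _ in range(D):                 # seasonal differencing at lag s
        x = x[s:] - x[:-s]
    for _ in range(d):                 # regular first differencing
        x = x[1:] - x[:-1]
    return x

# A deterministic linear trend plus a period-12 seasonal pattern...
t = np.arange(48)
seasonal = np.tile(np.sin(2 * np.pi * np.arange(12) / 12), 4)
series = 0.5 * t + 10 * seasonal
diffed = sarima_difference(series)    # ...is reduced to (numerically) zero
```

On real monthly data the differenced series is not zero but stationary, and ARMA orders are then identified from its autocorrelation and partial autocorrelation functions.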
Modelling and Forecasting Malaria Mortality Rate using SARIMA Models (A Case Study of Aboh Mbaise General Hospital, Imo State Nigeria)
doi:10.11648/j.sjams.20140201.15
Science Journal of Applied Mathematics and Statistics
2014-01-01
© Science Publishing Group
Ekezie Dan Dan
Opara Jude
Okenwe Idochi
Modelling and Forecasting Malaria Mortality Rate using SARIMA Models (A Case Study of Aboh Mbaise General Hospital, Imo State Nigeria)
2
1
41
41
2014-01-01
2014-01-01
10.11648/j.sjams.20140201.15
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140201.15
© Science Publishing Group
Mathematical Problem Appearing in Industrial Lumber Drying: A Review
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140201.14
This article is a review of our work on the modelling of lumber drying, which we started in 2003. We consider a lumber drying process in a kiln chamber which, from a mathematical point of view, is an initial and boundary value problem. The moisture content (MC) is measured at the center of the lumber using a nail whose size is thousands of times the pore size of the wood. This justifies a macro-scale model for the diffusion of water inside the lumber. The MC acts as the state variable u of the thickness coordinate x and time t, and satisfies a diffusion equation. The equilibrium moisture content (EMC) of the air acts as the boundary condition. We report progress on the mathematical modelling and compare the results with data from industry.
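The model described here, a diffusion equation for MC with the EMC as boundary condition, can be sketched with an explicit finite-difference scheme. This is my own minimal illustration, not the authors' code; the diffusivity, grid, and MC/EMC values are assumed:

```python
import numpy as np

def dry_step(u, D, dx, dt, emc):
    """One explicit time step of u_t = D u_xx across the lumber thickness,
    with the air's equilibrium moisture content (EMC) fixed at both surfaces.
    Stability of this explicit scheme requires D*dt/dx**2 <= 0.5."""
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + D * dt / dx ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u_new[0] = u_new[-1] = emc      # Dirichlet boundary: surfaces at EMC
    return u_new

nx, dx = 21, 0.05                    # thickness discretized into 21 nodes
D, dt, emc = 1e-3, 0.5, 0.12         # assumed diffusivity, time step, and EMC
u = np.full(nx, 0.60)                # assumed initial MC of green lumber
for _ in range(10_000):
    u = dry_step(u, D, dx, dt, emc)
center_mc = u[nx // 2]               # MC at the center relaxes toward the EMC
```

The measured quantity in the review, MC at the center of the board, corresponds to `center_mc` here; it decays toward the EMC on the diffusion time scale.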
Mathematical Problem Appearing in Industrial Lumber Drying: A Review
doi:10.11648/j.sjams.20140201.14
Science Journal of Applied Mathematics and Statistics
2014-01-01
© Science Publishing Group
Edi Cahyono
Mathematical Problem Appearing in Industrial Lumber Drying: A Review
2
1
30
30
2014-01-01
2014-01-01
10.11648/j.sjams.20140201.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140201.14
© Science Publishing Group
Comparative Study of Portmanteau Tests for the Residuals Autocorrelation in ARMA Models
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140201.11
The portmanteau statistic for testing the adequacy of an autoregressive moving average (ARMA) model is based on the first m autocorrelations of the residuals from the fitted model. We consider several portmanteau tests for univariate linear time series, namely those of Box and Pierce [2], Ljung and Box [9], Monti [12], Peña and Rodríguez [13 and 14], the Generalized Variance Test (Gvtest) of Mahdi and McLeod [11], and Fisher [4]. We conduct an extensive computer simulation of time series data to compare these tests. We consider different model parameters for small, moderate and large samples to examine the effect of the lag m on the power of the selected tests and to determine the most powerful test for ARMA models. The same portmanteau tests were also evaluated on a real data set on electricity consumption in Khan Younis, Palestine (April 2009 - May 2013). We found that the portmanteau tests have the highest power for large samples (N = 500) compared with small and moderate samples (N = 50 and 200). We also found that the portmanteau tests are sensitive to the chosen value of m. Indeed, power is lost as the lag m increases from m = 5 to 20, with the Box-Pierce, Ljung-Box and Monti tests losing more power than the other selected tests. The power loss reaches its minimum for large samples compared with small and moderate samples. In addition, the results of the simulation study and the real data analysis show that the most powerful test varies between the Gvtest and Fisher tests.
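Several of the statistics compared here are easy to state; the Ljung-Box version, for instance, can be sketched as follows on simulated white-noise residuals (a generic illustration, not the authors' code):

```python
import numpy as np

def ljung_box(residuals, m):
    """Ljung-Box statistic Q = n(n+2) * sum_{k=1..m} r_k^2 / (n-k), where
    r_k is the lag-k residual autocorrelation; under model adequacy Q is
    approximately chi-squared with m - (p+q) degrees of freedom."""
    e = np.asarray(residuals, dtype=float)
    e = e - e.mean()
    n = len(e)
    denom = np.sum(e ** 2)
    r = np.array([np.sum(e[k:] * e[:-k]) / denom for k in range(1, m + 1)])
    return n * (n + 2) * np.sum(r ** 2 / (n - np.arange(1, m + 1)))

rng = np.random.default_rng(0)
Q = ljung_box(rng.normal(size=500), m=20)  # white noise: Q near E[chi2_20] = 20
```

The sensitivity to m reported in the abstract enters through the upper limit of the sum: each extra lag adds a (usually small) squared autocorrelation but also a degree of freedom to the reference distribution.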
Comparative Study of Portmanteau Tests for the Residuals Autocorrelation in ARMA Models
doi:10.11648/j.sjams.20140201.11
Science Journal of Applied Mathematics and Statistics
2014-01-01
© Science Publishing Group
Samir K. Safi
Alaa A. Al-Reqep
Comparative Study of Portmanteau Tests for the Residuals Autocorrelation in ARMA Models
2
1
13
13
2014-01-01
2014-01-01
10.11648/j.sjams.20140201.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140201.11
© Science Publishing Group
Fuzzy Goal Programming to Optimization the Multi-Objective Problem
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140201.12
Many present-day problems are multi-objective in nature, and their solution requires consideration of conflicting objectives. Usually, they have a number of potentially Pareto-optimal solutions. Extensive knowledge of the problem is required in discriminating between solutions, eliminating the unwanted ones and accepting the required solution(s) through a decision making process. It is well known that multi-objective optimization models have found many important applications in decision making problems, such as in economic theory, management science and engineering design. Because of these applications, a large literature has been published studying optimality conditions, duality theories and topological properties of solutions of multi-objective optimization problems. In optimization, the idea of regularizing a problem by adding a strongly convex term to the objective function has a long history; the regularization technique has proved to be an invaluable tool in the solution of ill-posed problems, and an enormous amount of work has been devoted to its study. In this paper, a multi-objective optimization problem formulation based on goal programming methods solves the multi-objective problem and can tackle relatively large test systems. The method is based on optimizing the most preferred objective while treating the other objectives as constraints.
Fuzzy Goal Programming to Optimization the Multi-Objective Problem
doi:10.11648/j.sjams.20140201.12
Science Journal of Applied Mathematics and Statistics
2014-01-01
© Science Publishing Group
Azzabi Lotfi
Ayadi Dorra
Bachar Kaddour
Kobi Abdessamad
Fuzzy Goal Programming to Optimization the Multi-Objective Problem
2
1
19
19
2014-01-01
2014-01-01
10.11648/j.sjams.20140201.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140201.12
© Science Publishing Group
Measuring Portfolio Loss Using Approximation Methods
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140202.11
One of the approaches to determining and quantifying the credit risk of a loan portfolio is to obtain the distribution of losses of the portfolio and to determine the risk quantities from that distribution. In this paper, we describe the challenges of this approach and illustrate a practical solution in which simulation methods are used to obtain the loss distribution of a two-obligor portfolio. This is then extended to ten- and hundred-obligor portfolios. Existing probability distributions with specified parameters are then used to approximate the loss distributions obtained. Using the parameters of these distributions, we obtain the risk quantities associated with the loan portfolio, including the Expected and Unexpected Losses. We find that, depending on the confidence level at which we measure the Unexpected Loss, Stress Losses are needed to account for the total loss of the portfolio.
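The simulation idea for the two-obligor case can be sketched in a few lines, with illustrative exposures and default probabilities of my own choosing and a loss-given-default of 100%:

```python
import numpy as np

rng = np.random.default_rng(0)
exposures = np.array([100.0, 150.0])    # two obligors (hypothetical amounts)
default_prob = np.array([0.02, 0.05])   # hypothetical default probabilities
n_sim = 200_000

# Each scenario draws independent default indicators per obligor
defaults = rng.random((n_sim, 2)) < default_prob
losses = defaults @ exposures           # portfolio loss per scenario

expected_loss = losses.mean()           # EL near 100*0.02 + 150*0.05 = 9.5
unexpected_loss = np.quantile(losses, 0.99) - expected_loss
```

Extending to ten or a hundred obligors only lengthens the exposure and probability vectors; fitting a parametric distribution to the simulated `losses` gives the approximation step the abstract describes.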
Measuring Portfolio Loss Using Approximation Methods
doi:10.11648/j.sjams.20140202.11
Science Journal of Applied Mathematics and Statistics
2014-04-16
© Science Publishing Group
Osei Antwi
Measuring Portfolio Loss Using Approximation Methods
2
2
52
52
2014-04-16
2014-04-16
10.11648/j.sjams.20140202.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140202.11
© Science Publishing Group
A Probabilistic Estimation of the Basic Reproduction Number: A Case of Control Strategy of Pneumonia
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140202.12
Deterministic models have been used in the past to understand the epidemiology of infectious diseases, most importantly to estimate the basic reproduction number, R0, from disease parameters. However, this approach overlooks variation in the disease parameters of which R0 is a function, variation that can introduce a random effect on R0. In this paper, we estimate R0 as a random variable by first developing and analyzing a deterministic model for the transmission patterns of pneumonia, and then computing the probability distribution of R0 using a Markov Chain Monte Carlo (MCMC) simulation approach. A detailed analysis of the simulated transmission data leads to a probability distribution of R0, as opposed to the single value obtained in the conventional deterministic modeling approach. The results indicate that considering uncertainty in the computation of R0 generates additional useful information, which can be used to describe the effect of parameter changes in deterministic models.
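The core idea, propagating parameter uncertainty into R0, can be illustrated with plain Monte Carlo sampling. This is a deliberately simple SIR-type sketch in which R0 = beta/gamma, with made-up parameter distributions; the paper's pneumonia model and MCMC scheme are more elaborate:

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 50_000

# Treat the transmission rate beta and recovery rate gamma as random,
# gamma-distributed around plausible means (assumed values, not the paper's):
beta = rng.gamma(shape=4.0, scale=0.1, size=n_draws)     # mean 0.4
gamma_ = rng.gamma(shape=5.0, scale=0.04, size=n_draws)  # mean 0.2

R0 = beta / gamma_                      # a full distribution, not a point value
r0_mean = R0.mean()
r0_ci = np.percentile(R0, [2.5, 97.5])  # e.g. a 95% interval for R0
```

The interval `r0_ci` is the kind of extra information the abstract refers to: a point estimate near 2 can coexist with a non-negligible probability that R0 is below 1 or well above 3, which matters for control strategy.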
A Probabilistic Estimation of the Basic Reproduction Number: A Case of Control Strategy of Pneumonia
doi:10.11648/j.sjams.20140202.12
Science Journal of Applied Mathematics and Statistics
2014-04-16
© Science Publishing Group
Ong’ala Jacob Otieno
Mugisha Joseph
Oleche Paul
A Probabilistic Estimation of the Basic Reproduction Number: A Case of Control Strategy of Pneumonia
2
2
59
59
2014-04-16
2014-04-16
10.11648/j.sjams.20140202.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140202.12
© Science Publishing Group
Assessing Public Awareness about the Health Effects of Nicotine and Cigarettes Using Negative Binomial Regression
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140203.11
Both the public and private sectors have acted responsibly to help decrease smoking-related deaths by putting health warnings on all cigarette packages. This study investigated the social and demographic factors associated with public awareness of health warnings on the harmful effects of environmental tobacco smoke, based on baseline data collected by the health bureau of the Amhara Region (in Ethiopia). Respondents in the survey were asked to recall the number of anti-smoking messages that appeared as warning messages in cigarette advertisements. The number of anti-smoking messages recalled ranged from 0 to 7, with a mean of 2.90 (variance of 3.11) and a median of 3.00. Because the variance (3.11) differed from the mean (2.90), the negative binomial regression model provided an improved fit to the data and accounted better for overdispersion than the Poisson regression model, which assumes that the mean and variance are equal. The level of education was found to be the most significant factor. Moreover, the anti-smoking message recall rate of lower-income socio-economic class nonsmokers was 2.5 times that of lower socio-economic class smokers. Unlike men's, women's anti-smoking message response rate increased with income.
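The mean-variance comparison driving the model choice can be reproduced with a small simulation. The parameters below are made up to match the reported mean of 2.90; this is not the survey data:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, alpha = 2.9, 0.05         # NB2 parameterization: Var = mu + alpha*mu^2

# A negative binomial count is a Poisson count with a gamma-distributed rate
lam = rng.gamma(shape=1.0 / alpha, scale=alpha * mu, size=200_000)
counts = rng.poisson(lam)

sample_mean = counts.mean()   # close to 2.9
sample_var = counts.var()     # exceeds the mean: overdispersion
```

When the sample variance exceeds the mean like this, negative binomial regression accommodates the extra dispersion that Poisson regression, which forces variance equal to the mean, cannot.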
Assessing Public Awareness about the Health Effects of Nicotine and Cigarettes Using Negative Binomial Regression
doi:10.11648/j.sjams.20140203.11
Science Journal of Applied Mathematics and Statistics
2014-06-16
© Science Publishing Group
Awoke Seyoum Tegegne
Assessing Public Awareness about the Health Effects of Nicotine and Cigarettes Using Negative Binomial Regression
2
3
65
65
2014-06-16
2014-06-16
10.11648/j.sjams.20140203.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140203.11
© Science Publishing Group
Quantile Regression in Statistical Downscaling to Estimate Extreme Monthly Rainfall
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140203.12
Extreme rainfall events have been of great interest in statistical downscaling. This paper concerns developing a statistical downscaling model using quantile regression to estimate extreme monthly rainfall. Statistical downscaling functionally relates a local-scale response variable to global-scale predictor variables. The response variable is the monthly rainfall from 1979 to 2008 at station Bangkir, Indonesia, and the predictor variables are the monthly precipitation at 64 grid points of Global Circulation Model output over the same period. Principal Component Analysis is used to reduce the dimension of the predictors. The number of components for developing the quantile regression model is determined based on the Quantile Verification Skill Score. The results show that at the 95th quantile the pattern of forecasted rainfall from January to December 2008 is similar to the actual rainfall, with correlation 0.98, and the forecasted rainfall (843 mm) in February 2008 is considered extreme, which corresponds well to the highest actual rainfall (727 mm) with probability 0.99.
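Quantile regression rests on minimizing the check (pinball) loss, whose minimizer is the tau-th quantile. A minimal generic sketch, not the paper's downscaling model:

```python
import numpy as np

def pinball_loss(y, prediction, tau):
    """Check loss: tau*u for u >= 0 and (tau-1)*u for u < 0, u = y - prediction.
    Over constant predictions it is minimized by the tau-th sample quantile."""
    u = y - prediction
    return np.mean(np.maximum(tau * u, (tau - 1.0) * u))

rng = np.random.default_rng(0)
y = rng.normal(size=10_000)
tau = 0.95

# Grid-search the constant prediction minimizing the loss
grid = np.linspace(-4.0, 4.0, 801)
best = grid[np.argmin([pinball_loss(y, c, tau) for c in grid])]
# best lands close to np.quantile(y, 0.95)
```

In the paper the constant prediction is replaced by a linear function of the principal components of the GCM output, so the fitted line tracks the 95th conditional quantile of monthly rainfall.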
Quantile Regression in Statistical Downscaling to Estimate Extreme Monthly Rainfall
doi:10.11648/j.sjams.20140203.12
Science Journal of Applied Mathematics and Statistics
2014-07-14
© Science Publishing Group
Aji Hamim Wigena
Anik Djuraidah
Quantile Regression in Statistical Downscaling to Estimate Extreme Monthly Rainfall
2
3
70
70
2014-07-14
2014-07-14
10.11648/j.sjams.20140203.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140203.12
© Science Publishing Group
Solution of Multi-Objective Transportation Problem Via Fuzzy Programming Algorithm
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140204.11
This paper addresses the solution of a multi-objective transportation problem via a fuzzy programming algorithm. The data for this paper were collected from an egg dealer whose main office is located at Orji, Owerri, Imo State, Nigeria, who supplies the product to different wholesalers (destinations) after taking it from different poultry farms (sources); the time and cost of transportation from source i to destination j were recorded. TORA statistical software was employed in the data analysis, and the results revealed that if the hyperbolic membership function is used, the crisp model becomes linear. The results also revealed that the optimal compromise solution does not change when compared with the solution obtained using the linear membership function. Thus, the fuzzy optimal values do not depend on the chosen membership function, be it a linear or a non-linear membership function.
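The two membership function families being compared can be sketched as follows. This is a generic formulation for a minimized objective with best value L and worst value U; the hyperbolic shape parameter alpha is a common textbook choice, assumed here rather than taken from the paper:

```python
import numpy as np

def mu_linear(z, L, U):
    """Linear membership: 1 at the best value L, 0 at the worst value U."""
    return np.clip((U - z) / (U - L), 0.0, 1.0)

def mu_hyperbolic(z, L, U):
    """Hyperbolic membership: smooth S-shape passing through 0.5 midway."""
    alpha = 6.0 / (U - L)              # assumed shape parameter
    return 0.5 * np.tanh(alpha * ((U + L) / 2.0 - z)) + 0.5

L, U = 10.0, 50.0                      # illustrative objective bounds
mid = (L + U) / 2.0
```

Substituting either membership into the fuzzy program and maximizing the minimum membership yields the crisp model; with the hyperbolic form a change of variables linearizes it, which is the observation the abstract reports.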
Solution of Multi-Objective Transportation Problem Via Fuzzy Programming Algorithm
doi:10.11648/j.sjams.20140204.11
Science Journal of Applied Mathematics and Statistics
2014-07-22
© Science Publishing Group
Osuji, George A.
Okoli Cecilia N.
Opara, Jude
Solution of Multi-Objective Transportation Problem Via Fuzzy Programming Algorithm
2
4
77
77
2014-07-22
2014-07-22
10.11648/j.sjams.20140204.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140204.11
© Science Publishing Group
Dynamic Pricing Research on Perishable Products under Consumer Strategy Behaviour
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140204.12
This paper mainly focuses on the effect of consumer strategic behaviour on retailers' pricing mechanisms, examining the influence of consumers' strategic behaviour on price and profit under conditions of both uncertain and deterministic demand. By introducing a discount factor and considering two cases, timely inventory replenishment and fixed inventory, consumers' purchase decisions and dynamic pricing strategies are obtained.
Dynamic Pricing Research on Perishable Products under Consumer Strategy Behaviour
doi:10.11648/j.sjams.20140204.12
Science Journal of Applied Mathematics and Statistics
2014-08-05
© Science Publishing Group
Li Zhou
Jing Li
Dynamic Pricing Research on Perishable Products under Consumer Strategy Behaviour
2
4
84
84
2014-08-05
2014-08-05
10.11648/j.sjams.20140204.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140204.12
© Science Publishing Group
Possibility of Compiling Results of Individual Trials under a Single Model
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140204.13
The purpose of this work is to compile individual trials conducted at various locations and times in order to build and optimize a theoretical factorial model. A factorial plan is formed using planting time, plant density, nitrogen, phosphor and irrigation water data collected from trials conducted at Nazilli Cotton Research Station in 1966, 1967, 1971 and 1972. Using infinitesimal calculus theoretical combinations were formed, and individual and final R values were calculated. This was done by equalizing the different individual R values and levels belonging to independent variables. 1. Where the level numbers of the factors are non-recurrent (without frequency) and unequal; a) The level number of the factor with the largest level number should be accepted as the common level number. b) The individual (R) values total of the factor in question should be accepted as the final limit. c) The common individual (R) values total of the factor in question should be slightly lower than the final limit value. d) Individual (R) values calculated for each factor by finite infinitesimal calculus should be smaller than the largest individual (R) value of the factor in question. e) Finite infinitesimal calculus calculation should be started from the factor with the smallest level number. f) The largest valued total calculated on the factor in question and meeting the conditions in question (equalized R values totals and level numbers) should be accepted as the common total. 2. In case the level numbers of the factors consist of recurrent (with frequency) and non-recurrent (without frequency) groups, the calculations should be based on the group with the largest frequency. The operations defined under item 1 above shall also be applicable here. 3. In case the level numbers of the factors are non-recurrent (without frequency) and equal, the operations defined under item 1 above shall also be applicable here. 
The relative effects of the factors on the maximum yield level are given below: 25.224 % for planting time, 17.2245 % for plant density, 25.904 % for nitrogen, 13.904 % for phosphor, and 17.90 % for water. Here, five square squares of 5x5 are formed and 125 combinations are derived. Maximization was done by putting the individual R values with the largest final R value among the 125 combination in place in the formula R= -0.6080+ R_E+ R_B+ R_N+ R_P+ R_S , and maximum R value was calculated as R_max=752.110 kg/decare
The purpose of this work is to compile individual trials conducted at various locations and times in order to build and optimize a theoretical factorial model. A factorial plan is formed using planting time, plant density, nitrogen, phosphor and irrigation water data collected from trials conducted at Nazilli Cotton Research Station in 1966, 1967, 1971 and 1972. Using infinitesimal calculus theoretical combinations were formed, and individual and final R values were calculated. This was done by equalizing the different individual R values and levels belonging to independent variables. 1. Where the level numbers of the factors are non-recurrent (without frequency) and unequal; a) The level number of the factor with the largest level number should be accepted as the common level number. b) The individual (R) values total of the factor in question should be accepted as the final limit. c) The common individual (R) values total of the factor in question should be slightly lower than the final limit value. d) Individual (R) values calculated for each factor by finite infinitesimal calculus should be smaller than the largest individual (R) value of the factor in question. e) Finite infinitesimal calculus calculation should be started from the factor with the smallest level number. f) The largest valued total calculated on the factor in question and meeting the conditions in question (equalized R values totals and level numbers) should be accepted as the common total. 2. In case the level numbers of the factors consist of recurrent (with frequency) and non-recurrent (without frequency) groups, the calculations should be based on the group with the largest frequency. The operations defined under item 1 above shall also be applicable here. 3. In case the level numbers of the factors are non-recurrent (without frequency) and equal, the operations defined under item 1 above shall also be applicable here. 
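The maximization step described in the abstract — substituting individual R values into R = -0.6080 + R_E + R_B + R_N + R_P + R_S and keeping the largest final R — can be sketched as follows. The level values below are hypothetical placeholders, not the Nazilli trial data:

```python
from itertools import product

INTERCEPT = -0.6080  # constant term of the fitted model

# Hypothetical individual R values (kg/decare) for the five factors at
# five levels each; the real values come from the trial data.
factors = {
    "planting_time": [150.0, 160.0, 170.0, 180.0, 190.0],
    "plant_density": [100.0, 110.0, 120.0, 125.0, 130.0],
    "nitrogen":      [150.0, 165.0, 180.0, 190.0, 195.0],
    "phosphor":      [90.0, 95.0, 100.0, 104.0, 105.0],
    "water":         [120.0, 125.0, 130.0, 134.0, 135.0],
}

def max_final_r(factors):
    """Enumerate all level combinations and return the largest final R."""
    best = max(product(*factors.values()), key=sum)
    return INTERCEPT + sum(best)
```

Because the model is additive, the maximizing combination is simply the one that takes each factor at its best level; the enumeration mirrors the combination search described above.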
Possibility of Compiling Results of Individual Trials under a Single Model
doi:10.11648/j.sjams.20140204.13
Science Journal of Applied Mathematics and Statistics
2014-08-07
© Science Publishing Group
Yunus Babur
Possibility of Compiling Results of Individual Trials under a Single Model
2
4
90
90
2014-08-07
2014-08-07
10.11648/j.sjams.20140204.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140204.13
© Science Publishing Group
Validating Regression Models to Assess Factors for Motorcycle Accidents in Tanzania
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140205.12
There are several ways in which a regression model can be validated. One of them is the collection of new data to check the model's predictions (Snee, 1977). In Nyakyi et al. (2014) a multiple regression model was formulated, analyzed, and discussed to assess motorcycle accidents in Tanzania. The model included several factors: wrong overtaking, not owning a license, rough roads, high speed, mechanical defects, personal status, driver experience, and tarmac roads. All these factors were considered to be causes of motorcycle accidents in Tanzania, specifically in the Arusha and Kilimanjaro regions. This paper presents the analysis and discussion of the accident problem using new data collected through a questionnaire; the study's results are then compared with the results predicted by the multiple regression model for validation. Questionnaires were designed to assess the extent to which the identified factors cause motorcycle accidents, and the data were obtained from motorcycle stakeholders, who in this study are considered to have reliable information about motorcycle accidents in Tanzania. The results of the validation indicate that there is a good correlation between the data obtained from the questionnaire and the data produced by the regression model.
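Validation by correlating new observations with model predictions, as described here, amounts to computing a Pearson correlation coefficient between the two series. A minimal sketch with hypothetical numbers, not the study's data:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between observed and model-predicted values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-factor accident rates: questionnaire vs. model prediction
observed  = [0.31, 0.12, 0.18, 0.42, 0.09, 0.15, 0.11, 0.07]
predicted = [0.29, 0.14, 0.20, 0.40, 0.10, 0.13, 0.12, 0.08]
r = pearson(observed, predicted)  # a value near 1 indicates good agreement
```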
Validating Regression Models to Assess Factors for Motorcycle Accidents in Tanzania
doi:10.11648/j.sjams.20140205.12
Science Journal of Applied Mathematics and Statistics
2014-09-27
© Science Publishing Group
Vicent Paul Nyakyi
Dmitry Kuznetsov
Yaw Nkansah-Gyekye
Validating Regression Models to Assess Factors for Motorcycle Accidents in Tanzania
2
5
101
101
2014-09-27
2014-09-27
10.11648/j.sjams.20140205.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140205.12
© Science Publishing Group
Analysis of Factors that Affect Road Traffic Accidents in Bahir Dar City, North Western Ethiopia
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140205.11
Traffic accidents are increasing at an alarming rate and are a serious problem throughout the globe, particularly in developing countries like Ethiopia. In this study our aim was to identify the major factors that affect the occurrence of traffic accidents in Bahir Dar city, North Western Ethiopia. The drivers were selected using simple random sampling, and descriptive statistics, Chi-square tests and binary logistic regression were used for data analysis. The Hosmer-Lemeshow test showed that the model fits the data very well. From the results we found that drivers giving the priority stated by the law, pedestrians’ manner while crossing the road, and drivers’ use of seat belts have a statistically significant impact on the occurrence of traffic accidents in the city. Pedestrians’ manner while crossing the road is one of the significant variables for the occurrence of road traffic accidents in Bahir Dar city. Therefore, the traffic police or other concerned bodies should give training to pedestrians about traffic accidents. To minimize road traffic accidents, the government should enforce rules on seat belt use so that drivers apply them.
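For reference, the binary logistic regression used in such an analysis links the probability p_i that driver i is involved in an accident to explanatory factors x_{i1}, ..., x_{ik} (priority behaviour, pedestrian manner, seat-belt use, etc.) through the logit in its standard form:

```latex
\log\frac{p_i}{1-p_i} = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_k x_{ik}
```

A factor is reported as statistically significant when the test on its coefficient \beta_j rejects \beta_j = 0.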
Analysis of Factors that Affect Road Traffic Accidents in Bahir Dar City, North Western Ethiopia
doi:10.11648/j.sjams.20140205.11
Science Journal of Applied Mathematics and Statistics
2014-09-27
© Science Publishing Group
Haile Mekonnen Fenta
Demeke Lakew Workie
Analysis of Factors that Affect Road Traffic Accidents in Bahir Dar City, North Western Ethiopia
2
5
96
96
2014-09-27
2014-09-27
10.11648/j.sjams.20140205.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140205.11
© Science Publishing Group
Research on Income Influence of Enterprise Perishable under Consumer Strategy Behaviour
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140205.13
This paper studies the impact of consumer strategic behavior on the earnings of enterprises selling perishable products. It mainly considers the impact on income through revenue and cost analysis, and how other factors influence these two components and thus indirectly affect the retailer's cost. The conclusions of the study lay a theoretical foundation for the dynamic pricing of perishable products.
Research on Income Influence of Enterprise Perishable under Consumer Strategy Behaviour
doi:10.11648/j.sjams.20140205.13
Science Journal of Applied Mathematics and Statistics
2014-10-22
© Science Publishing Group
Li Zhou
Jing Li
Dongsheng Xu
Research on Income Influence of Enterprise Perishable under Consumer Strategy Behaviour
2
5
106
106
2014-10-22
2014-10-22
10.11648/j.sjams.20140205.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140205.13
© Science Publishing Group
An Alternative Estimator for Estimating the Finite Population Mean in Presence of Measurement Errors with the View to Financial Modelling
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140206.11
This article addresses the problem of estimating the population mean using auxiliary information in the presence of measurement errors. We compare three proposed estimators, namely the exponential ratio-type estimator, the Solanki et al. (2012) estimator, and the mean per unit estimator, in the presence of measurement errors. The financial model of Gujrati and Sangeetha (2007) is employed in our empirical analysis. Our investigation indicates that the proposed general class of estimator t4 is the most suitable estimator, with a smaller MSE relative to the other estimators under measurement errors.
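The kind of MSE comparison reported here can be illustrated by simulation. The sketch below contrasts the mean per unit estimator with an exponential ratio-type estimator in the Bahl-Tuteja form t = ybar * exp((Xbar - xbar)/(Xbar + xbar)) under additive measurement error; the population is invented and this is not the paper's t4 estimator or its data:

```python
import random
from math import exp

random.seed(42)

def simulate_mse(n=30, reps=2000, err_sd=0.5):
    """Empirical MSE of the mean-per-unit and exponential ratio-type
    estimators of the population mean under measurement error."""
    # Hypothetical finite population: y strongly correlated with auxiliary x
    N = 1000
    xs = [random.gauss(50, 10) for _ in range(N)]
    ys = [2.0 + 0.8 * x + random.gauss(0, 3) for x in xs]
    Ybar, Xbar = sum(ys) / N, sum(xs) / N
    mse_mean = mse_exp = 0.0
    for _ in range(reps):
        idx = random.sample(range(N), n)
        # observed sample values contaminated by additive measurement error
        y_obs = [ys[i] + random.gauss(0, err_sd) for i in idx]
        x_obs = [xs[i] + random.gauss(0, err_sd) for i in idx]
        ybar, xbar = sum(y_obs) / n, sum(x_obs) / n
        t_exp = ybar * exp((Xbar - xbar) / (Xbar + xbar))
        mse_mean += (ybar - Ybar) ** 2
        mse_exp += (t_exp - Ybar) ** 2
    return mse_mean / reps, mse_exp / reps
```

With a positively correlated auxiliary variable, the ratio-type estimator exploits the known population mean Xbar and should show the smaller empirical MSE.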
An Alternative Estimator for Estimating the Finite Population Mean in Presence of Measurement Errors with the View to Financial Modelling
doi:10.11648/j.sjams.20140206.11
Science Journal of Applied Mathematics and Statistics
2014-11-18
© Science Publishing Group
Rajesh Singh
Sachin Malik
Mohd Khoshnevisan
An Alternative Estimator for Estimating the Finite Population Mean in Presence of Measurement Errors with the View to Financial Modelling
2
6
111
111
2014-11-18
2014-11-18
10.11648/j.sjams.20140206.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140206.11
© Science Publishing Group
On Solving Some Classes of Nonlinear Fractional Differentional Equations Using Fractal Index Method
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140206.12
We provide a new solution of the diffusion fractional differential equation using the fractal index and fractional sub-equation methods. We also derive a new solution for the fractional Bernoulli equation of arbitrary order using the fractal index method. As a result many exact solutions are obtained. It is shown that the considered method provides a very effective tool for solving fractional differential equations.
On Solving Some Classes of Nonlinear Fractional Differentional Equations Using Fractal Index Method
doi:10.11648/j.sjams.20140206.12
Science Journal of Applied Mathematics and Statistics
2014-12-19
© Science Publishing Group
Sayed K. Elagan
Mohamed S. Mohamed
Khaled A. Gepreel
Rabha W. Ibrahim
Afaf Elesimy
On Solving Some Classes of Nonlinear Fractional Differentional Equations Using Fractal Index Method
2
6
115
115
2014-12-19
2014-12-19
10.11648/j.sjams.20140206.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140206.12
© Science Publishing Group
A Multiplicative Autoregressive Integrated Moving Average Model for Kenya’s Inflation (2000:1 – 2013:12)
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140206.14
Using monthly inflation data from January 2000 to December 2013, we find that SARIMA (1,1,1)(1,0,1)12 can represent the behavior of the inflation rate in Kenya well. Based on the selected model, we forecast twelve (12) months of inflation rates for Kenya outside the sample period (i.e. from January 2014 to December 2014). The observed inflation rates from January to November, as published by the Kenya Bureau of Statistics, fall within the 95% confidence interval obtained from the designed model. However, the confidence intervals were wide, an indication of the high volatility of Kenya’s inflation rates.
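In standard backshift-operator notation, the selected SARIMA (1,1,1)(1,0,1)12 model for the monthly inflation series y_t reads:

```latex
(1 - \phi_1 B)(1 - \Phi_1 B^{12})(1 - B)\, y_t
  = (1 + \theta_1 B)(1 + \Theta_1 B^{12})\, \varepsilon_t
```

that is, one non-seasonal difference (1 - B), AR(1) and MA(1) terms at lag 1, seasonal AR(1) and MA(1) terms at lag 12, and white-noise errors \varepsilon_t.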
A Multiplicative Autoregressive Integrated Moving Average Model for Kenya’s Inflation (2000:1 – 2013:12)
doi:10.11648/j.sjams.20140206.14
Science Journal of Applied Mathematics and Statistics
2014-12-31
© Science Publishing Group
Nyabwanga Robert Nyamao
A Multiplicative Autoregressive Integrated Moving Average Model for Kenya’s Inflation (2000:1 – 2013:12)
2
6
129
129
2014-12-31
2014-12-31
10.11648/j.sjams.20140206.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140206.14
© Science Publishing Group
A Study of Profit and Failure Rate of Musharakah Mutanaqisah Homeownership Partnership (MMP) by Retrospective Actuarial Approach and Simulation Study: The Case of Abandoned Housing Project
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140206.13
The paper examines the impact of the profit rate and failure rate on the profit of Musharakah Mutanaqisah Homeownership Partnership (MMP) as practised at a local Islamic bank in Malaysia, with the probability of the housing project's failure incorporated into this facility. The retrospective method of actuarial mathematics is used to calculate the outstanding balance. Moreover, this study derives equations for the probability of an abandoned housing project from actuarial mathematics. The failure rate is assumed to be exponentially distributed. The profit varies with changes in variables such as the profit rate, failure rate and tenure. This study also considers the Islamic bank's position in bearing the losses of MMP when an abandoned housing project occurs.
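Since the failure rate is assumed exponentially distributed, the probability that a housing project has been abandoned by time t follows directly from the exponential law; a minimal sketch (the rate and tenure values are hypothetical illustrations, not the paper's calibration):

```python
from math import exp

def abandonment_prob(rate, t):
    """P(project abandoned by time t) under an exponential failure law:
    1 - exp(-rate * t)."""
    return 1.0 - exp(-rate * t)

# e.g. a hypothetical failure rate of 2% per year over a 20-year tenure
p20 = abandonment_prob(0.02, 20)
```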
A Study of Profit and Failure Rate of Musharakah Mutanaqisah Homeownership Partnership (MMP) by Retrospective Actuarial Approach and Simulation Study: The Case of Abandoned Housing Project
doi:10.11648/j.sjams.20140206.13
Science Journal of Applied Mathematics and Statistics
2014-12-31
© Science Publishing Group
Farhana Syed Ahmad
Shamsul Rijal Muhammad Sabri
A Study of Profit and Failure Rate of Musharakah Mutanaqisah Homeownership Partnership (MMP) by Retrospective Actuarial Approach and Simulation Study: The Case of Abandoned Housing Project
2
6
121
121
2014-12-31
2014-12-31
10.11648/j.sjams.20140206.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20140206.13
© Science Publishing Group
Linear Regression, Fundamental Issue in Training and Application of Engineering
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150301.11
This paper examines the impact on student learning of teaching linear regression and correlation analysis using new information and communication technologies (ICT), supported by the Descartes Project, through tasks that allow students to investigate the type of scatterplot, the equation of the line, and the analysis of prognostic variables. The research considered two groups, one of 28 and one of 26 students; one used the technology and the other did not, and both were taught a maieutic class with exercises on the topic. Both groups carried out the same tasks, one of them with a computer. For the analysis of results, the answers were classified, since the practice is based on tasks involving graphs and the application of equations. The evaluation codes used were: 1. has an idea = Excellent, 2. has little idea = Good, 3. did not understand anything = Poor; the average rates obtained were 70.39% for those with an idea, 25.76% for those with little idea, and 3.43% for those who did not understand.
Linear Regression, Fundamental Issue in Training and Application of Engineering
doi:10.11648/j.sjams.20150301.11
Science Journal of Applied Mathematics and Statistics
2015-01-20
© Science Publishing Group
Luz Elva Marín Vaca
Martha Lilia Domínguez Patiño
Nadia Lara Ruiz
Miguel Aguilar Cortes
Linear Regression, Fundamental Issue in Training and Application of Engineering
3
1
5
5
2015-01-20
2015-01-20
10.11648/j.sjams.20150301.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150301.11
© Science Publishing Group
A Power Law Extrapolation – Interpolation Method for IBNR Claims Reserving
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150301.12
To calculate claims reserves more frequently than the usual yearly periods for which ultimate loss development factors are available, it is necessary to perform an extrapolation prior to the time marking the end of the first development year and an interpolation for each successive development year. A simple power law extrapolation – interpolation method is developed and illustrated for monthly and quarterly sub-periods.
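One generic way to realize a power-law interpolation between yearly development factors is local log-log linear interpolation, sketched below; this is an illustrative assumption, not necessarily the exact method developed in the paper, and the factors are invented:

```python
from math import log

def power_law_interp(ages, factors, t):
    """Interpolate a development factor at fractional age t by fitting a
    local power law F(t) = a * t^b through the two bracketing yearly
    points (log-log linear interpolation)."""
    pts = list(zip(ages, factors))
    for (t0, f0), (t1, f1) in zip(pts, pts[1:]):
        if t0 <= t <= t1:
            b = (log(f1) - log(f0)) / (log(t1) - log(t0))
            return f0 * (t / t0) ** b
    raise ValueError("t outside the tabulated ages")

# Hypothetical yearly age-to-ultimate development factors
ages, factors = [1, 2, 3, 4], [2.40, 1.45, 1.15, 1.05]
quarterly = power_law_interp(ages, factors, 1.25)  # factor at 15 months
```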
A Power Law Extrapolation – Interpolation Method for IBNR Claims Reserving
doi:10.11648/j.sjams.20150301.12
Science Journal of Applied Mathematics and Statistics
2015-02-02
© Science Publishing Group
Werner Hürlimann
A Power Law Extrapolation – Interpolation Method for IBNR Claims Reserving
3
1
13
13
2015-02-02
2015-02-02
10.11648/j.sjams.20150301.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150301.12
© Science Publishing Group
Estimating Default Correlations Using Simulated Asset Values
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150301.13
We outline the ingredients necessary to compute the Joint Default Probability, from which we obtain Default Correlation, an important risk quantity in the determination of the Internal Rating Based Approach in the Basel II and III documents on banking supervision and regulation. We discuss Merton’s structural approach, of which one key drawback is the difficulty in tracking and calibrating asset value processes, and the limitations of variant models, which tend to be analytically too complex and computationally intensive. We address these issues by simulating all the possible asset value processes of a firm whose asset paths we assume to be Gaussian. By generating random values that simulate all the possible asset value processes, we are able to capture all the possible default horizons within a certain macroeconomic framework. Drawing standardised normally distributed asset values of obligors, we obtain a range of values of Joint Default Probabilities at a specified asset correlation, from which the corresponding range of default correlations is obtained. The result is a simplified approach to the determination of default correlation, easily implementable in Excel and less analytically complicated than existing procedures.
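A sketch of this simulation approach: draw correlated standard-normal asset values, flag defaults below the thresholds implied by the marginal default probabilities, and convert the estimated joint default probability into a default correlation. The probabilities and asset correlation below are illustrative, not calibrated values:

```python
import random
from math import sqrt
from statistics import NormalDist

random.seed(1)
nd = NormalDist()

def default_correlation(p1, p2, asset_rho, sims=100_000):
    """Estimate the default correlation of two obligors by simulating
    standard-normal asset values with correlation asset_rho; an obligor
    defaults when its asset value falls below the threshold implied by
    its marginal default probability."""
    k1, k2 = nd.inv_cdf(p1), nd.inv_cdf(p2)   # default thresholds
    joint = 0
    for _ in range(sims):
        z1 = random.gauss(0, 1)
        z2 = asset_rho * z1 + sqrt(1 - asset_rho**2) * random.gauss(0, 1)
        joint += (z1 < k1) and (z2 < k2)
    p12 = joint / sims                        # joint default probability
    # default correlation implied by the joint default probability
    return (p12 - p1 * p2) / sqrt(p1 * (1 - p1) * p2 * (1 - p2))
```

Note that the default correlation is typically much smaller than the asset correlation that generates it.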
Estimating Default Correlations Using Simulated Asset Values
doi:10.11648/j.sjams.20150301.13
Science Journal of Applied Mathematics and Statistics
2015-02-02
© Science Publishing Group
Osei Antwi
Dadzie Joseph
Louis Appiah Gyekye
Estimating Default Correlations Using Simulated Asset Values
3
1
21
21
2015-02-02
2015-02-02
10.11648/j.sjams.20150301.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150301.13
© Science Publishing Group
The Common Principal Component (CPC) Approach to Functional time Series (FTS) Models
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150301.14
Functional time series (FTS) models are used for analyzing, modeling and forecasting age-specific mortality rates. However, the application of these models in the presence of two or more groups within similar populations needs some modification. In these cases, it is desirable for the disaggregated forecasts to be coherent with the overall forecast; 'coherent' forecasts are non-divergent forecasts of sub-groups within a population. Reference [1] first proposed a coherent functional model based on products and ratios of mortality rates. In this paper, we relate some of the functional time series models to the common principal components (CPC) and partial common principal components (PCPC) models introduced by [2] and provide methods to estimate these models. We call them common functional principal component (CFPC) models and use them for coherent mortality forecasting. We propose a sequential procedure based on the Johansen methodology to estimate the model parameters, and use a vector approach with error correction models to forecast the time series coefficients specific to each sub-group.
The Common Principal Component (CPC) Approach to Functional time Series (FTS) Models
doi:10.11648/j.sjams.20150301.14
Science Journal of Applied Mathematics and Statistics
2015-02-09
© Science Publishing Group
Farah Yasmeen
The Common Principal Component (CPC) Approach to Functional time Series (FTS) Models
3
1
26
26
2015-02-09
2015-02-09
10.11648/j.sjams.20150301.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150301.14
© Science Publishing Group
Effects of Response Errors on Population Parameters in Double Sampling for Stratification
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150302.11
This study investigates the effects of response errors, at the interview and data-processing level, on population parameters obtained by double sampling for stratification. A simulation study is carried out to investigate the effects of the response errors on these estimates. A finite population is generated using the R statistical package, and the study variables are investigated in the presence and absence of errors, with the results compared. It is observed that the presence of response errors results in underestimation of the population parameters.
Effects of Response Errors on Population Parameters in Double Sampling for Stratification
doi:10.11648/j.sjams.20150302.11
Science Journal of Applied Mathematics and Statistics
2015-03-02
© Science Publishing Group
David Anekeya Alilah
Robert Keli
Harun Makwata
James Kahiri
Leo O. Odongo
Effects of Response Errors on Population Parameters in Double Sampling for Stratification
3
2
32
32
2015-03-02
2015-03-02
10.11648/j.sjams.20150302.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150302.11
© Science Publishing Group
A Study on a Tandem Stochastic Queueing Model with Parallel Phases and a Numerical Example
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150302.12
In this study a two-stage queueing model is analyzed. At the first stage there is a single server having exponential service time with parameter μ_1, and no waiting is allowed in front of this server. There are two parallel phase-type servers at the second stage, and these parallel servers have exponential service time with parameter μ_2. Arrivals to this system are Poisson with parameter λ. An arriving customer is served if the server at the first stage is available, or leaves the system if the server is busy, which is where the first loss occurs. After being served at the first stage, the customer proceeds to the second stage; if both of the phase-type parallel servers at the second stage are available, the customer chooses one of them with probability 0.50, or leaves the system if any of these servers is busy, so that the second loss occurs. A customer who has been served at both stages leaves the system. The number of customers in this model is represented by a 3-dimensional Markov chain, and the Kolmogorov differential equations are obtained. The mean number of customers and the mean waiting time in the system are then obtained from the limit probabilities. We show that the customer numbers at the first and second stages are dependent on each other. The numerical analysis of the obtained performance measures is illustrated by an example. Finally, graphs of the loss probabilities and performance measures are given for some values of the arrival rate λ and the service parameters.
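The two loss probabilities of this model can also be estimated by Monte-Carlo simulation, which is straightforward here because neither stage has a waiting room; a minimal sketch with illustrative λ and μ values:

```python
import random

random.seed(7)

def simulate(lam, mu1, mu2, n_arrivals=100_000):
    """Estimate the first- and second-loss probabilities of the tandem
    loss system: stage 1 has one exponential(mu1) server with no queue;
    stage 2 has two parallel exponential(mu2) servers and admits a
    customer only when both are free, as in the model above."""
    t = 0.0
    s1_free_at = 0.0           # time the stage-1 server becomes free
    s2_free_at = [0.0, 0.0]    # times the stage-2 servers become free
    loss1 = loss2 = 0
    for _ in range(n_arrivals):
        t += random.expovariate(lam)          # Poisson arrivals
        if t < s1_free_at:                    # stage-1 server busy: first loss
            loss1 += 1
            continue
        done1 = t + random.expovariate(mu1)   # stage-1 service completion
        s1_free_at = done1
        if done1 < max(s2_free_at):           # a stage-2 server busy: second loss
            loss2 += 1
            continue
        k = random.randrange(2)               # both free: pick one with prob 0.5
        s2_free_at[k] = done1 + random.expovariate(mu2)
    return loss1 / n_arrivals, loss2 / n_arrivals
```

For stage 1 alone this is an M/M/1/1 loss system, so with λ = μ_1 the first-loss probability should be close to ρ/(1+ρ) = 0.5.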
A Study on a Tandem Stochastic Queueing Model with Parallel Phases and a Numerical Example
doi:10.11648/j.sjams.20150302.12
Science Journal of Applied Mathematics and Statistics
2015-03-08
© Science Publishing Group
Vedat Sağlam
Erdinç Yücesoy
Murat Sağır
Müjgan Zobu
A Study on a Tandem Stochastic Queueing Model with Parallel Phases and a Numerical Example
3
2
38
38
2015-03-08
2015-03-08
10.11648/j.sjams.20150302.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150302.12
© Science Publishing Group
Determination of the Status of Utilization and Management Scenarios Bonito (Auxis rochei) Caught in the Talaud Waters North Sulawesi
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150302.13
Bonito (Auxis rochei) needs to be managed well because, even as a renewable natural resource, it can undergo depletion or extinction. One approach to the management of fish resources is modeling. The analysis was performed to obtain the best estimate of the surplus production model in order to determine the maximum sustainable yield (MSY), utilization level, and effort level of bonito. Catch and fishing-effort data for bonito were collected from the Marine and Fisheries Service of the Talaud Regency and the North Sulawesi Province. The best surplus production model for assessing the potential of bonito is the Schaefer model. The optimal effort (E_MSY) is 8,489 trips per year, with an optimal catch (C_MSY) of 2,453.77 tons per year. The effort level for 2012 is 193.99%, which shows inefficiency of effort; the utilization level of 94.86% indicates that overfishing will occur.
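The Schaefer estimates follow from a linear fit of catch-per-unit-effort against effort, CPUE = a − b·E, giving E_MSY = a/(2b) and C_MSY = a²/(4b). A minimal sketch with illustrative effort/catch figures (not the Talaud data):

```python
import numpy as np

# Schaefer surplus-production estimates from catch-and-effort data.
effort = np.array([4000., 5000., 6000., 7000., 8000., 9000.])   # trips/yr
catch_ = np.array([2100., 2300., 2400., 2450., 2400., 2250.])   # tonnes/yr
cpue = catch_ / effort

slope, a = np.polyfit(effort, cpue, 1)   # CPUE = a - b * E
b = -slope                               # Schaefer slope is negative
E_msy = a / (2 * b)
C_msy = a ** 2 / (4 * b)
utilisation = catch_[-1] / C_msy         # latest catch relative to MSY
print(f"E_MSY = {E_msy:.0f} trips/yr, C_MSY = {C_msy:.1f} t/yr, "
      f"utilisation = {utilisation:.1%}")
```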
Determination of the Status of Utilization and Management Scenarios Bonito (Auxis rochei) Caught in the Talaud Waters North Sulawesi
doi:10.11648/j.sjams.20150302.13
Science Journal of Applied Mathematics and Statistics
2015-03-10
© Science Publishing Group
John S. Kekenusa
Marline S. Paendong
Winsy Ch. D. Weku
Sendy B. Rondonuwu
Determination of the Status of Utilization and Management Scenarios Bonito (Auxis rochei) Caught in the Talaud Waters North Sulawesi
3
2
46
46
2015-03-10
2015-03-10
10.11648/j.sjams.20150302.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150302.13
© Science Publishing Group
Analysis of the Volatility of the Electricity Price in Kenya Using Autoregressive Integrated Moving Average Model
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150302.14
Electricity has proved to be a vital input to most developing economies. As the Kenyan government aims at transforming Kenya into a newly industrialized and globally competitive country, more energy is expected to be used in the commercial sector on the road to 2030. Therefore, modelling and forecasting of electricity costs in Kenya is of vital concern. In this study, Autoregressive Integrated Moving Average (ARIMA) models were applied to the monthly costs of electricity so as to determine the most efficient and adequate model for analysing the volatility of the electricity cost in Kenya. Finally, the fitted ARIMA model was used to produce an out-of-sample forecast of electricity cost for September 2013 to August 2016. The forecast values indicate that the costs will rise initially but later adopt a decreasing trend. A better understanding of the electricity cost trend in the small commercial sector will help producers make informed decisions about their products, as electricity is a major input in the sector. It will also assist the government in making appropriate policy measures to maintain or even lower the electricity cost.
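As a minimal illustration of the mechanics (not the paper's fitted model), an ARIMA(1,1,0) can be computed by hand: difference the series, fit the AR coefficient by least squares, forecast the differences and integrate back. The monthly cost series below is synthetic.

```python
import numpy as np

# Hand-rolled ARIMA(1,1,0) sketch on a synthetic trending cost series.
rng = np.random.default_rng(0)
y = 100 + np.cumsum(0.5 + rng.normal(0, 1, 60))    # 60 monthly "costs"

d = np.diff(y)                                      # d = 1 differencing
phi = np.sum(d[1:] * d[:-1]) / np.sum(d[:-1] ** 2)  # AR(1) coefficient

h = 12                                              # 12-month horizon
fc_diff, last = [], d[-1]
for _ in range(h):
    last = phi * last                               # AR(1) recursion
    fc_diff.append(last)
forecast = y[-1] + np.cumsum(fc_diff)               # undo differencing
print("last observed:", round(y[-1], 2))
print("12-step forecast:", np.round(forecast, 2))
```

In practice a library fit (e.g. a statsmodels-style ARIMA with order selection by AIC) replaces the hand-fitted AR(1).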
Analysis of the Volatility of the Electricity Price in Kenya Using Autoregressive Integrated Moving Average Model
doi:10.11648/j.sjams.20150302.14
Science Journal of Applied Mathematics and Statistics
2015-03-30
© Science Publishing Group
Mohammed Mustapha Wasseja
Samwel N. Mwenda
Analysis of the Volatility of the Electricity Price in Kenya Using Autoregressive Integrated Moving Average Model
3
2
57
57
2015-03-30
2015-03-30
10.11648/j.sjams.20150302.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150302.14
© Science Publishing Group
On Convergence a Variation of the Converse of Fabry Gap Theorem
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150302.15
In this article we give a variation of the converse of the Fabry Gap theorem concerning the location of singularities, on the boundary of convergence, of Taylor-Dirichlet series whose circle of convergence is the unit circle and for which the unit circle is not the natural boundary.
On Convergence a Variation of the Converse of Fabry Gap Theorem
doi:10.11648/j.sjams.20150302.15
Science Journal of Applied Mathematics and Statistics
2015-04-03
© Science Publishing Group
Molood Gorji
Naser Abbasi
On Convergence a Variation of the Converse of Fabry Gap Theorem
3
2
62
62
2015-04-03
2015-04-03
10.11648/j.sjams.20150302.15
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150302.15
© Science Publishing Group
Fractional Dynamics of Computer Virus Propagation
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.11
This paper studies a fractional-order SEIR model for computer viruses. Firstly, the basic reproduction number R0, which determines the threshold of the spread of the virus, is determined. The stability of the equilibria is also determined and studied. The Adams-Bashforth-Moulton algorithm is employed to solve and simulate the system of differential equations. The simulation results show that a small change in α leads to a big change in the associated numerical results.
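A Grünwald-Letnikov scheme (a simpler relative of the Adams-Bashforth-Moulton method the paper uses) shows where the fractional order α enters the iteration. The SEIR form and all parameter values below are illustrative assumptions, not the paper's calibrated virus model.

```python
import numpy as np

# Caputo-type Grünwald-Letnikov iteration for a fractional SEIR system.
alpha, h, steps = 0.9, 0.05, 400
beta, sigma, gamma = 0.5, 0.3, 0.1        # infection / latency / recovery

def f(x):
    S, E, I, R = x
    return np.array([-beta * S * I,
                     beta * S * I - sigma * E,
                     sigma * E - gamma * I,
                     gamma * I])

# GL binomial weights: c_0 = 1, c_j = (1 - (1 + alpha)/j) * c_{j-1};
# for alpha = 1 this collapses to the explicit Euler method.
c = np.ones(steps + 1)
for j in range(1, steps + 1):
    c[j] = (1 - (1 + alpha) / j) * c[j - 1]

x = np.zeros((steps + 1, 4))
x[0] = [0.95, 0.04, 0.01, 0.0]
y = np.zeros_like(x)                       # y_n = x_n - x_0 (Caputo shift)
for n in range(1, steps + 1):
    memory = sum(c[j] * y[n - j] for j in range(1, n + 1))
    y[n] = h ** alpha * f(x[n - 1]) - memory
    x[n] = x[0] + y[n]
print("final (S, E, I, R):", np.round(x[-1], 4))
```

Rerunning with a slightly different α illustrates the sensitivity the abstract reports.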
Fractional Dynamics of Computer Virus Propagation
doi:10.11648/j.sjams.20150303.11
Science Journal of Applied Mathematics and Statistics
2015-04-15
© Science Publishing Group
Bonyah Ebenezer
Nyabadza Farai
Asiedu-Addo Samuel Kwesi
Fractional Dynamics of Computer Virus Propagation
3
3
69
69
2015-04-15
2015-04-15
10.11648/j.sjams.20150303.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.11
© Science Publishing Group
Study on Financial Market Risk Measurement Based on GJR-GARCH and FHS
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.12
In this paper, we establish GJR-GARCH models to extract the residuals of the logarithmic returns of a Chinese stock index, the Shanghai Composite Index; a series of independent and identically distributed standardized residuals is formed from the filtered model residuals and the conditional volatilities of the return series under the GJR-GARCH model. The results show that, comparing the actual values with the lower limit of the predicted VaR values, the actual index value breaks below the predicted lower limit on 8 days. The VaR method fits the market risk of the Shanghai Composite Index better than traditional historical simulation.
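The filtering step can be sketched as follows: the GJR-GARCH(1,1) recursion σ²_t = ω + (α + γ·1[r_{t−1}<0])·r²_{t−1} + β·σ²_{t−1} produces conditional volatilities, and dividing returns by them gives the i.i.d. standardized residuals used by filtered historical simulation. Parameter values and the return series here are illustrative stand-ins, not Shanghai Composite estimates.

```python
import numpy as np

# GJR-GARCH(1,1) filtering and a filtered-historical-simulation VaR.
omega, alpha, gamma, beta = 1e-6, 0.05, 0.08, 0.90

rng = np.random.default_rng(1)
r = rng.normal(0, 0.01, 1000)              # stand-in log returns

sigma2 = np.empty_like(r)
sigma2[0] = r.var()
for t in range(1, len(r)):
    leverage = gamma * (r[t - 1] < 0)      # extra weight on bad news
    sigma2[t] = (omega + (alpha + leverage) * r[t - 1] ** 2
                 + beta * sigma2[t - 1])

z = r / np.sqrt(sigma2)                    # standardised residuals
# One-day 99% VaR: worst 1% of residuals scaled by tomorrow's volatility.
sigma2_next = (omega + (alpha + gamma * (r[-1] < 0)) * r[-1] ** 2
               + beta * sigma2[-1])
var_99 = -np.quantile(z, 0.01) * np.sqrt(sigma2_next)
print("1-day 99% VaR:", round(var_99, 5))
```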
Study on Financial Market Risk Measurement Based on GJR-GARCH and FHS
doi:10.11648/j.sjams.20150303.12
Science Journal of Applied Mathematics and Statistics
2015-04-28
© Science Publishing Group
Hong Zhang
Jian Guo
Li Zhou
Study on Financial Market Risk Measurement Based on GJR-GARCH and FHS
3
3
74
74
2015-04-28
2015-04-28
10.11648/j.sjams.20150303.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.12
© Science Publishing Group
Application of Vector Autoregressive (VAR) Process in Modelling Reshaped Seasonal Univariate Time Series
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.15
The Seasonal Autoregressive Integrated Moving Average (SARIMA) model has been applied in most research work to forecast seasonal univariate data; less has been done with the Vector Autoregressive (VAR) process. In this research project, seasonal univariate time series data are used to estimate a VAR model for a reshaped seasonal univariate time series for forecasting. This is done by modeling the reshaped seasonal univariate time series data using VAR. The quarterly data are reshaped to vector form and analyzed using VAR for the years 1959 to 1997 to fit the model, and the prediction for the year 1998 is used to evaluate the prediction performance. The performance measures used include the mean square error (MSE), root mean square error (RMSE), mean percentage error (MPE), mean absolute percentage error (MAPE) and Theil's U statistic. Future values were forecast from the fitted models in both SARIMA and VAR using Box-Jenkins procedures (Box and Jenkins, 1976). The results show that both models are appropriate for forecasting, but VAR is the more appropriate model, since its predictive performance was shown to be the best. Other data sets were also used for analysis and comparison purposes.
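The reshaping step turns a quarterly univariate series into a 4-variate annual series (one column per quarter), to which a VAR can then be fitted. A minimal sketch with synthetic seasonal data (not the study's series):

```python
import numpy as np

# Reshape a quarterly series into annual vectors and fit a VAR(1)
# by least squares: Y_t = c + A @ Y_{t-1} + e_t.
rng = np.random.default_rng(2)
years = 30
season = np.tile([10., 14., 9., 12.], years)          # quarterly pattern
y = season + 0.1 * np.arange(4 * years) + rng.normal(0, 0.5, 4 * years)

Y = y.reshape(years, 4)          # row = year, columns = Q1..Q4

X = np.hstack([np.ones((years - 1, 1)), Y[:-1]])
coef, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
c, A = coef[0], coef[1:].T
forecast = c + A @ Y[-1]         # next year's four quarters
print("forecast Q1..Q4:", np.round(forecast, 2))
```

The held-out final year can then be compared with this forecast via MSE, RMSE, MAPE, etc.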
Application of Vector Autoregressive (VAR) Process in Modelling Reshaped Seasonal Univariate Time Series
doi:10.11648/j.sjams.20150303.15
Science Journal of Applied Mathematics and Statistics
2015-05-18
© Science Publishing Group
Chepngetich Mercy
John Kihoro
Application of Vector Autoregressive (VAR) Process in Modelling Reshaped Seasonal Univariate Time Series
3
3
135
135
2015-05-18
2015-05-18
10.11648/j.sjams.20150303.15
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.15
© Science Publishing Group
Hope for the Future: Overcoming the DEEP Ignorance on the CI (Confidence Intervals) and on the DOE (Design of Experiments)
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.14
The document shows the ideas to overcome the deep ignorance on the CI (Confidence Intervals) and on DOE (Design Of Experiments). The first part poses the problem, which originated in RG (ResearchGate): it analyses a few of the answers found in the forum, and some wrong ideas one can find in Wikipedia; the connection with the Test of Hypotheses is given, and some figures are provided that make “intuitive” the concept of the Confidence Interval within the Theory (Classical Statistics). The second part considers some cases one can find in a very WWU (World Wide Used) book: we show that high scores on documents do not prove the quality of those documents. This paper is especially written to settle the matter for the researchers who use CI and DOE: researchers must be alert in order to do a good job. Many other cases could be shown, but the paper would then be 10 times longer; to keep it short I had to cancel pages providing the ideas on “Scientificness”, forgotten by many people, and others on ideas from Wikipedia that mislead the readers.
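The frequentist meaning of a confidence interval, the point the figures aim to make intuitive, can be checked by simulation: over repeated samples, about 95% of the computed intervals cover the fixed true mean. A quick Monte Carlo check (normal data, known-variance z-interval):

```python
import numpy as np

# Empirical coverage of the 95% z-interval for a normal mean.
rng = np.random.default_rng(3)
mu, sigma, n, reps = 5.0, 2.0, 25, 20000
z = 1.959964                                   # 97.5% normal quantile

samples = rng.normal(mu, sigma, (reps, n))
means = samples.mean(axis=1)
half = z * sigma / np.sqrt(n)                  # half-width of interval
covered = np.mean((means - half <= mu) & (mu <= means + half))
print(f"empirical coverage: {covered:.3f}")
```

The random quantity is the interval, not the parameter: each repetition draws a new interval around a fixed μ.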
Hope for the Future: Overcoming the DEEP Ignorance on the CI (Confidence Intervals) and on the DOE (Design of Experiments)
doi:10.11648/j.sjams.20150303.14
Science Journal of Applied Mathematics and Statistics
2015-05-15
© Science Publishing Group
Fausto Galetto
Hope for the Future: Overcoming the DEEP Ignorance on the CI (Confidence Intervals) and on the DOE (Design of Experiments)
3
3
123
123
2015-05-15
2015-05-15
10.11648/j.sjams.20150303.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.14
© Science Publishing Group
Empirical Research on VAR Model Based on GJR-GARCH, EVT and Copula
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.16
In this paper, we establish GJR-GARCH models to extract the residuals of the logarithmic returns of two indices, the New York Stock Exchange Composite Index (NYA) and the NASDAQ, and estimate the distribution function of the residuals using the Gaussian kernel method and Extreme Value Theory. The kernel cumulative distribution function estimates are well suited to the interior of the distribution, where most of the residuals are found, while the POT method of Extreme Value Theory fits the extreme residuals in the upper and lower tails well. The Monte Carlo technique is used to simulate the returns of the securities indices 20,000 times after the marginal distributions of the residual returns are obtained. Secondly, a copula function is used to obtain the joint distribution of the two stock indices. Lastly, the VaR of the portfolio consisting of the two equally weighted composite indices is calculated at different confidence levels.
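The copula step can be sketched as follows: draw correlated normals, map them to uniforms, and push the uniforms through each marginal's quantile function; the portfolio VaR is then read off the simulated portfolio returns. Here a Gaussian copula and plain empirical marginals stand in for the paper's GJR-GARCH/EVT marginals, and all inputs are illustrative.

```python
import numpy as np
from math import erf

# Gaussian-copula Monte Carlo VaR for an equally weighted two-index
# portfolio, with empirical stand-in marginals.
rng = np.random.default_rng(4)
n_sim, rho = 20000, 0.7

r1 = rng.standard_t(5, 2000) * 0.010          # stand-in returns, index 1
r2 = rng.standard_t(5, 2000) * 0.012          # stand-in returns, index 2

L = np.linalg.cholesky([[1.0, rho], [rho, 1.0]])
zc = L @ rng.normal(size=(2, n_sim))          # correlated normals
u = 0.5 * (1 + np.vectorize(erf)(zc / np.sqrt(2)))   # normal CDF -> U(0,1)
sim1 = np.quantile(r1, u[0])                  # empirical quantile transform
sim2 = np.quantile(r2, u[1])

port = 0.5 * sim1 + 0.5 * sim2                # equal weights
var_ = {q: -np.quantile(port, 1 - q) for q in (0.95, 0.99)}
for q, v in var_.items():
    print(f"{q:.0%} VaR: {v:.4f}")
```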
Empirical Research on VAR Model Based on GJR-GARCH, EVT and Copula
doi:10.11648/j.sjams.20150303.16
Science Journal of Applied Mathematics and Statistics
2015-05-23
© Science Publishing Group
Hong Zhang
Li Zhou
Shucong Ming
Yanming Yang
Mengdan Zhou
Empirical Research on VAR Model Based on GJR-GARCH, EVT and Copula
3
3
143
143
2015-05-23
2015-05-23
10.11648/j.sjams.20150303.16
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.16
© Science Publishing Group
Effect of Chemical Reaction on Statistical Theory of Dusty Fluid MHD Turbulent Flow for Certain Variables at Three- Point Distribution Functions
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.13
In this paper, an attempt is made to study the three-point distribution functions in dusty fluid MHD turbulent flow for simultaneous velocity, magnetic, temperature and concentration fields undergoing a first-order chemical reaction. The various properties of the constructed distribution functions are discussed. Throughout the study, the transport equation for the three-point distribution functions in dusty fluid MHD turbulent flow undergoing a first-order chemical reaction is obtained. The obtained equation is compared with the first equation of the BBGKY hierarchy, and the closure difficulty is removed as in the case of ordinary turbulence.
Effect of Chemical Reaction on Statistical Theory of Dusty Fluid MHD Turbulent Flow for Certain Variables at Three- Point Distribution Functions
doi:10.11648/j.sjams.20150303.13
Science Journal of Applied Mathematics and Statistics
2015-05-05
© Science Publishing Group
M. Abul Kalam Azad
M. Abu Bkar Pk
Abdul Malek
Effect of Chemical Reaction on Statistical Theory of Dusty Fluid MHD Turbulent Flow for Certain Variables at Three- Point Distribution Functions
3
3
98
98
2015-05-05
2015-05-05
10.11648/j.sjams.20150303.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.13
© Science Publishing Group
Reliability Equivalence Analysis of a Parallel-Series System Subject to Degradation Facility
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.19
The performance of a reliability system can be improved by different methods; e.g., the reliability of one or more components can be improved, or hot or cold redundant components can be added to the system. Sometimes these measures are equivalent, in that they have the same effect on some performance measure of the system. This paper discusses the reliability equivalences of a parallel-series system. The system considered here consists of m subsystems connected in parallel, with subsystem i consisting of n_i independent and identical components in series, for i = 1, 2, …, m. Three different methods are used to improve the system reliability: (i) the reduction method, (ii) the hot duplication method and (iii) the cold duplication method. Each component of the system has four states and two types of partial failure rates. In this study, the lifetimes of the system components are exponentially distributed. A numerical example is introduced to illustrate how the idea of this work can be applied.
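For the baseline structure, the reliability of a parallel-series system of independent components follows from R = 1 − ∏ᵢ(1 − rᵢ^{nᵢ}), where rᵢ is the component reliability in subsystem i. A minimal sketch with exponential lifetimes, as assumed in the paper, and illustrative failure rates (the paper's four-state partial-failure structure is not modeled here):

```python
import math

# Reliability of a parallel-series system: subsystem i is a series of
# n_i identical components; the m subsystems are connected in parallel.
def subsystem_rel(r, n):
    return r ** n                     # series of n identical components

def system_rel(rs, ns):
    unrel = 1.0
    for r, n in zip(rs, ns):
        unrel *= 1 - subsystem_rel(r, n)   # all subsystems must fail
    return 1 - unrel

# Component reliabilities at mission time t: r_i = exp(-lambda_i * t).
t = 100.0
lams = [0.002, 0.003, 0.0015]         # illustrative failure rates
ns = [2, 3, 2]
rs = [math.exp(-lam * t) for lam in lams]
print("system reliability:", round(system_rel(rs, ns), 4))
```

Improvement methods can then be compared by recomputing `system_rel` after reducing a failure rate or adding a redundant component.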
Reliability Equivalence Analysis of a Parallel-Series System Subject to Degradation Facility
doi:10.11648/j.sjams.20150303.19
Science Journal of Applied Mathematics and Statistics
2015-06-08
© Science Publishing Group
M. A. El-Damcese
Reliability Equivalence Analysis of a Parallel-Series System Subject to Degradation Facility
3
3
164
164
2015-06-08
2015-06-08
10.11648/j.sjams.20150303.19
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.19
© Science Publishing Group
Research on Logistics Demand Forecasting and Transportation Structure of Beijing Based on Grey Prediction Model
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.17
This paper analyzes the present situation of logistics development in Beijing. Starting from total economic output, economic structure, economic location and other aspects, the basic economic situation of Beijing is analyzed. The status and problems of Beijing's logistics development are analyzed in terms of the current state of transportation infrastructure construction, the development of the logistics industry, and logistics enterprises; the logistics development environment of Beijing is then analyzed further. All of this indicates that a forecast of Beijing's logistics demand is very necessary. An econometric model is used to analyze and forecast the total logistics demand of Beijing; the influencing factors of the demand are discussed, and an index system for logistics demand forecasting is constructed. Freight volume and freight turnover are selected as quantitative indices measuring total logistics demand, and using the Eviews model and analysis it is found that Beijing's logistics demand will grow fast over the next five years.
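The grey prediction model named in the title is typically GM(1,1): accumulate the series, estimate the development coefficient a and grey input b by least squares on the accumulated sequence, and forecast from the resulting exponential. A sketch with illustrative annual freight figures (not Beijing's data):

```python
import numpy as np

# GM(1,1) grey prediction sketch.
x0 = np.array([260., 275., 293., 308., 327.])        # annual freight

x1 = np.cumsum(x0)                                   # 1-AGO sequence
z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
B = np.column_stack([-z1, np.ones(len(z1))])
(a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)  # x0 = -a*z1 + b

def x1_hat(k):                                       # fitted AGO curve
    return (x0[0] - b / a) * np.exp(-a * k) + b / a

horizon = 5
vals = x1_hat(np.arange(len(x0) + horizon))
x0_hat = np.diff(vals)                # restored series for k >= 1
forecast = x0_hat[len(x0) - 1:]       # the next `horizon` years
print("a =", round(a, 4), " 5-year forecast:", np.round(forecast, 1))
```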
Research on Logistics Demand Forecasting and Transportation Structure of Beijing Based on Grey Prediction Model
doi:10.11648/j.sjams.20150303.17
Science Journal of Applied Mathematics and Statistics
2015-06-02
© Science Publishing Group
Jie Zhu
Hong Zhang
Li Zhou
Research on Logistics Demand Forecasting and Transportation Structure of Beijing Based on Grey Prediction Model
3
3
152
152
2015-06-02
2015-06-02
10.11648/j.sjams.20150303.17
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.17
© Science Publishing Group
Simulation of Heterogeneous Financial Market Model Based on Cellular Automaton
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.18
In recent years, researchers have analyzed historical data from the financial markets. They found that the statistical results differ from the classical financial theories, models, and methods; the difference challenges the three hypotheses of rational agents, efficient markets and the random walk. We need new perspectives and tools to re-study the financial market as a complex system. A cellular-automata-based heterogeneous financial market model is proposed in this dissertation. In this model, the market participants are divided into two categories, fundamentalists and chartists, and a learning rule allows every market participant to switch between these two categories. The method emulates the interacting behaviors of the market participants, and thereby the overall market behavior. The author analyzes the sources of randomness, the mean-reverting property, bubble formation and bursting, and the stationarity of this model, as well as the relationships between the cellular-automata-based heterogeneous financial market model and the Ornstein-Uhlenbeck and GARCH models. The data simulated by the model fit characteristics that the classical theory fails to explain, such as the fat tail of the return distribution, negative skewness, the relationship between return and trading volume, the randomness of volatility, and volatility clustering. How to add more heterogeneity into the model is also discussed. In this dissertation, using cellular automata as a tool, an option pricing model and a heterogeneous financial market model are proposed. The result of the option pricing model is close to the result calculated by the formula, and the simulation of the heterogeneous financial market model explains many phenomena that cannot be explained by the classical theory, such as fat-tailed returns and the formation and bursting of bubbles.
The author also gives a preliminary design of a financial market model based on asynchronous cellular automata. These models and conclusions indicate that cellular automata are able to capture the randomness of financial markets and simulate the behavior of the participants in the financial market.
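A minimal sketch in the spirit of the fundamentalist/chartist mechanism (not the dissertation's model): each cell is an agent on a ring; fundamentalists trade toward a fixed fundamental value, chartists follow the recent trend, and agents occasionally copy a neighbour's type. All parameters are illustrative assumptions.

```python
import numpy as np

# Fundamentalist/chartist price dynamics on a one-dimensional ring.
rng = np.random.default_rng(5)
n, steps = 200, 300
fundamental = 100.0
is_chartist = rng.random(n) < 0.5
prices = [100.0, 100.0]

for _ in range(steps):
    trend = prices[-1] - prices[-2]
    demand = np.where(is_chartist,
                      0.5 * trend,                        # follow the trend
                      0.05 * (fundamental - prices[-1]))  # revert to value
    noise = rng.normal(0, 0.1, n)
    prices.append(prices[-1] + (demand + noise).mean())
    # imitation: copy the type of a random ring neighbour with prob 0.05
    switch = rng.random(n) < 0.05
    neigh = (np.arange(n) + rng.choice([-1, 1], n)) % n
    is_chartist = np.where(switch, is_chartist[neigh], is_chartist)

returns = np.diff(np.log(prices))
print("return std:", round(returns.std(), 5))
```

Even this toy version produces return series whose volatility varies with the prevailing mix of agent types.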
In recent years, researchers analyzed the historical data from the financial markets. They found that the statistical result is different from the classical financial theories, models, and methods. The difference is challenging the three hypotheses which are rational people hypothesis, efficient market hypothesis and random walk hypothesis. We need new perspective and tools to re-study the financial market as a complex system. A cellular automata based heterogeneous financial market model is proposed in this categories which dissertation. In this model, the market participant id divided in to two is the fundamentalists and chartists. A learn rules is used to make sure all the market participant can convert in these two categories. The method emulates the interact behaviors between the market participants, and emulates the overall market behavior. The author analyzes the randomness sources, mean-reverting property, bubble happen and bust, and stationary of this model. The author analyzes the relationships between cellular automata based heterogeneous financial market model and the Ornstein-Uhlenbeck model and GARCH models. The data simulated by the financial market model is fit the characteristics such as the fat tail of return's distribution, negative skewness, relationship between return and trading volume, the randomness of volatility, and volatility cluster, which the classical theory is failed to explain. How to add more heterogeneity into the model is discussed in this dissertation. In this dissertation, by using the cellular automata as a tool, an option pricing model and a heterogeneous financial market model are proposed. The result of the option pricing model is close to the result calculated by the formula. The simulation of heterogeneous financial market model can explain many phenomenons which can not be explained by the classical theory, such as the fat-tail of return and the bubble happen and bust. 
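The abstract does not spell out the model's update rules. As a rough, hypothetical illustration of how a fundamentalist/chartist cellular-automaton market of this kind can be simulated, the following sketch places agents on a ring; the agent counts, rate constants, and the crude type-copying "learning rule" are all invented for illustration, not taken from the dissertation:

```python
import random

def simulate_ca_market(n_agents=50, n_steps=200, p0=100.0, fundamental=100.0, seed=1):
    """Toy cellular-automaton market on a ring of agents.

    Agents are 'F' (fundamentalist: demand proportional to the gap between
    price and fundamental value) or 'C' (chartist: follow the last price
    move plus noise).  A crude stand-in for a learning rule lets one agent
    per step copy a neighbour's type, so the F/C mix can drift.
    """
    rng = random.Random(seed)
    types = [rng.choice("FC") for _ in range(n_agents)]
    prices = [p0]
    last_move = 0.0
    for _ in range(n_steps):
        demand = 0.0
        for t in types:
            if t == "F":
                demand += fundamental - prices[-1]          # mean-reverting demand
            else:
                demand += last_move + rng.gauss(0.0, 0.5)   # trend-following + noise
        new_price = prices[-1] + 0.01 * demand / n_agents   # price impact of net demand
        last_move = new_price - prices[-1]
        prices.append(new_price)
        i = rng.randrange(n_agents)                         # "learning": copy a neighbour
        types[i] = types[(i + 1) % n_agents]
    return prices
```

The fundamentalist term drives the mean reversion the abstract mentions, while the chartist term injects the randomness and feedback that can produce bubbles.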
Simulation of Heterogeneous Financial Market Model Based on Cellular Automaton
doi:10.11648/j.sjams.20150303.18
Science Journal of Applied Mathematics and Statistics
2015-06-02
© Science Publishing Group
Hong Zhang
Li Zhou
Yifan Yang
Lu Qiu
Simulation of Heterogeneous Financial Market Model Based on Cellular Automaton
3
3
159
159
2015-06-02
2015-06-02
10.11648/j.sjams.20150303.18
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.18
© Science Publishing Group
Multivariate Approach to Partial Correlation Analysis
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.20
A multivariate approach to generating variance-covariance matrices and partial correlation coefficients of one or more independent variables has long concerned advanced statisticians and users of statistical tools. This work tackled the problem by holding one or more variables constant and partitioning the variance-covariance matrices to find multivariate partial correlations. Because of the challenges involved in analyzing and computing complex variables, this research used matrices to ascertain the level of relationship that exists among these variables and obtained correlation coefficients from the variance-covariance matrices. It was proved that the partial correlation coefficients form diagonal matrices that are normally distributed.
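In the simplest case (two variables with one held constant), partitioning the variance-covariance matrix reduces to the standard first-order partial correlation formula r_xy·z = (r_xy − r_xz·r_yz)/√((1−r_xz²)(1−r_yz²)). A minimal pure-Python sketch of that case (function names are mine):

```python
import math

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (sa * sb)

def partial_corr(x, y, z):
    """Correlation of x and y with the variable z held constant."""
    rxy, rxz, ryz = corr(x, y), corr(x, z), corr(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))
```

For more variables held constant, the same quantity comes from the partial covariance matrix Σ_xx − Σ_xz Σ_zz⁻¹ Σ_zx, which is the partitioning the abstract describes.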
Multivariate Approach to Partial Correlation Analysis
doi:10.11648/j.sjams.20150303.20
Science Journal of Applied Mathematics and Statistics
2015-06-12
© Science Publishing Group
Onyeneke Casmir Chidiebere
Multivariate Approach to Partial Correlation Analysis
3
3
170
170
2015-06-12
2015-06-12
10.11648/j.sjams.20150303.20
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150303.20
© Science Publishing Group
Multivariate Outlier Detection Using Independent Component Analysis
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150304.11
Recent developments have found rather unexpected applications of the theory of independent component analysis (ICA) in outlier detection, data clustering, multivariate data visualization, and related areas. Accurate identification of outliers plays an important role in statistical analysis: if classical statistical models are blindly applied to data containing outliers, the results can be misleading at best. In addition, outliers themselves are often the points of special interest in many practical situations, and their identification is the main purpose of the investigation. This paper attempts a new method for multivariate outlier detection using ICA and compares it with other outlier detection techniques in the literature.
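The paper's ICA procedure is not given in the abstract, but the first step of any ICA pipeline (e.g. FastICA) is whitening, and extreme values in the whitened coordinates are already a useful outlier screen. The sketch below is only that preprocessing step for 2-D data, using the closed-form eigendecomposition of the 2×2 covariance matrix; the cutoff of 2.5 and all function names are my own choices, not the paper's method:

```python
import math

def whiten_2d(data):
    """Whiten 2-D points (zero mean, identity covariance) via the
    closed-form eigendecomposition of the 2x2 sample covariance."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    xs = [p[0] - mx for p in data]
    ys = [p[1] - my for p in data]
    sxx = sum(v * v for v in xs) / n
    syy = sum(v * v for v in ys) / n
    sxy = sum(a * b for a, b in zip(xs, ys)) / n
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    gap = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + gap, tr / 2 - gap        # eigenvalues, l1 >= l2
    if abs(sxy) > 1e-12:
        v1 = (l1 - syy, sxy)                   # eigenvector for l1
    else:
        v1 = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(*v1)
    v1 = (v1[0] / norm, v1[1] / norm)
    v2 = (-v1[1], v1[0])                       # orthogonal eigenvector
    return [((a * v1[0] + b * v1[1]) / math.sqrt(l1),
             (a * v2[0] + b * v2[1]) / math.sqrt(l2))
            for a, b in zip(xs, ys)]

def flag_outliers(data, cut=2.5):
    """Flag indices whose whitened components exceed |cut|."""
    return [i for i, (c1, c2) in enumerate(whiten_2d(data))
            if abs(c1) > cut or abs(c2) > cut]
```

Full ICA would additionally rotate the whitened data to maximize non-Gaussianity, which tends to align one component with the outlying direction.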
Multivariate Outlier Detection Using Independent Component Analysis
doi:10.11648/j.sjams.20150304.11
Science Journal of Applied Mathematics and Statistics
2015-06-19
© Science Publishing Group
Md. Shamim Reza
Sabba Ruhi
Multivariate Outlier Detection Using Independent Component Analysis
3
4
176
176
2015-06-19
2015-06-19
10.11648/j.sjams.20150304.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150304.11
© Science Publishing Group
Solving Linear Time Varying Systems by Orthonormal Bernstein Polynomials
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150304.15
In this paper we present a method for solving time-varying systems using orthonormal Bernstein polynomials. The method is based on expanding the various time functions in the system in truncated orthonormal Bernstein polynomial series. An operational matrix of integration is presented and utilized to reduce the solution of time-varying systems to the solution of algebraic equations. Illustrative examples are included to demonstrate the validity and applicability of the technique.
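The paper's basis is the *orthonormal* Bernstein polynomials (typically obtained by Gram-Schmidt from the ordinary Bernstein basis); as background, the ordinary basis B_{i,n}(t) = C(n,i)·tⁱ·(1−t)ⁿ⁻ⁱ and the expansion of a function in it can be sketched in a few lines. The function names are mine:

```python
from math import comb

def bernstein_basis(n, t):
    """Degree-n Bernstein basis polynomials B_{i,n}(t) = C(n,i) t^i (1-t)^(n-i)."""
    return [comb(n, i) * t ** i * (1 - t) ** (n - i) for i in range(n + 1)]

def bernstein_approx(f, n, t):
    """Bernstein operator (B_n f)(t) = sum_i f(i/n) B_{i,n}(t)."""
    return sum(f(i / n) * b for i, b in enumerate(bernstein_basis(n, t)))
```

Two properties make the basis convenient for operational-matrix methods: the basis sums to 1 at every t (partition of unity), and the Bernstein operator reproduces linear functions exactly.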
Solving Linear Time Varying Systems by Orthonormal Bernstein Polynomials
doi:10.11648/j.sjams.20150304.15
Science Journal of Applied Mathematics and Statistics
2015-07-29
© Science Publishing Group
Mahmood Dadkhah
Solving Linear Time Varying Systems by Orthonormal Bernstein Polynomials
3
4
198
198
2015-07-29
2015-07-29
10.11648/j.sjams.20150304.15
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150304.15
© Science Publishing Group
Conversely Convergence Theorem of Fabry Gap
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150304.12
Our previous paper proved a variation of the converse of the Fabry gap theorem concerning the location of singularities of Taylor-Dirichlet series on the boundary of convergence. In the present paper, we prove a converse convergence theorem for the Fabry gap, which yields another proof of the Fabry gap theorem. This proof may be of interest in itself.
Conversely Convergence Theorem of Fabry Gap
doi:10.11648/j.sjams.20150304.12
Science Journal of Applied Mathematics and Statistics
2015-06-25
© Science Publishing Group
Naser Abbasi
Molood Gorji
Conversely Convergence Theorem of Fabry Gap
3
4
183
183
2015-06-25
2015-06-25
10.11648/j.sjams.20150304.12
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150304.12
© Science Publishing Group
The Application of ARIMA Model in 2014 Shanghai Composite Stock Price Index
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150304.16
In order to study the changes in the Shanghai Composite Stock Price Index (SCSPI) and predict the trend of stock market fluctuations, this paper constructs a time-series analysis. A non-stationary trend is found, and an ARIMA model is found to model the data adequately. A short-term trend of the Shanghai Composite Stock Price Index is then predicted using the established model.
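The abstract does not state the fitted model order or software. As a minimal sketch of the ARIMA workflow it describes, assuming the simplest non-stationary case ARIMA(1,1,0): difference the series once, fit the AR(1) coefficient on the differences by least squares, then iterate and cumulate back to levels. In practice one would use a statistics package rather than this hand-rolled version:

```python
def fit_arima_110(series):
    """Fit ARIMA(1,1,0): first-difference the series, then estimate the
    AR(1) coefficient on the demeaned differences by least squares."""
    d = [b - a for a, b in zip(series, series[1:])]
    mu = sum(d) / len(d)
    x = [v - mu for v in d]
    num = sum(a * b for a, b in zip(x, x[1:]))   # lag-1 autocovariance (numerator)
    den = sum(a * a for a in x[:-1])
    phi = num / den
    return phi, mu, x[-1]

def forecast_arima_110(series, h):
    """Iterate the fitted AR(1) on differences and cumulate back to levels."""
    phi, mu, last = fit_arima_110(series)
    level = series[-1]
    out = []
    for _ in range(h):
        last = phi * last
        level += mu + last
        out.append(level)
    return out
```

Differencing handles the non-stationary trend the paper finds; the AR part models the remaining serial dependence.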
The Application of ARIMA Model in 2014 Shanghai Composite Stock Price Index
doi:10.11648/j.sjams.20150304.16
Science Journal of Applied Mathematics and Statistics
2015-08-05
© Science Publishing Group
Renhao Jin
Sha Wang
Fang Yan
Jie Zhu
The Application of ARIMA Model in 2014 Shanghai Composite Stock Price Index
3
4
203
203
2015-08-05
2015-08-05
10.11648/j.sjams.20150304.16
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150304.16
© Science Publishing Group
Modeling the Effects of Time Delay on HIV-1 in Vivo Dynamics in the Presence of ARVs
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150304.17
Mathematical models describing the in vivo and in vitro immunological response to HIV-1 infection in humans have been of major concern due to the rich variety of parameters affecting the dynamics. In this paper, HIV-1 in vivo dynamics is studied to predict and describe its evolution in the presence of ARVs using delay differential equations. The delay accounts for the latent period between HIV – CD4<sup>+</sup> T cell binding (infection) and the production of infectious virus from the host cell. The model uses four variables: healthy CD4<sup>+</sup> T-cells (T), infected CD4<sup>+</sup> T-cells (T<sup>*</sup>), infectious virus (V<sub>I</sub>) and non-infectious virus (V<sub>N</sub>). Of particular importance is the effect of time delay and drug efficacy on the stability of the disease-free (DFE) and endemic equilibrium points (EEP). Analytical results showed that the DFE is stable for all τ>0. On the other hand, there is a critical value of the delay τ<sub>1</sub>>0 such that the EEP is stable for all τ>τ<sub>1</sub> but unstable for τ<τ<sub>1</sub>. The critical delay τ<sub>1</sub> is the bifurcation value at which the HIV-1 in vivo dynamics undergoes a Hopf bifurcation. Stability of both equilibria is achieved only if the drug efficacy 0≤ε≤1 is above a threshold value ε<sub>c</sub>. Numerical simulations show that this stability is achieved at a drug efficacy of ε<sub>c</sub>=0.59 and a time delay of τ<sub>1</sub>=0.65 days. This ratifies the fact that if CD4<sup>+</sup> T cells remain inactive for long periods of time τ>τ<sub>1</sub>, the HIV-1 viral materials will not be reproduced, and the immune system together with treatment will have enough time to clear the viral materials from the blood, and thus the EEP is maintained.
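Delay differential equations of this kind are commonly integrated with a fixed-step scheme and a history buffer holding the state τ time units back. The sketch below is *not* the paper's four-variable model: it is a deliberately simplified two-compartment version (healthy cells T and virus V) with illustrative rate constants of my own choosing, shown only to make the history-buffer mechanics concrete:

```python
def simulate_delayed_infection(tau=0.65, dt=0.01, t_end=10.0):
    """Euler integration of a simplified delayed infection model:
    virus produced at time t reflects cells infected at time t - tau,
    read from a history buffer of length tau/dt.  All rate constants
    below are illustrative, not fitted values from the paper."""
    s, d, k, p, c = 10.0, 0.1, 1e-4, 10.0, 3.0
    lag = int(round(tau / dt))          # delay measured in Euler steps
    T, V = 1000.0, 1.0
    hist = [(T, V)] * (lag + 1)         # constant initial history on [-tau, 0]
    out = [(0.0, T, V)]
    for n in range(int(t_end / dt)):
        T_lag, V_lag = hist[0]          # state tau time units ago
        dT = s - d * T - k * T * V               # supply - death - infection
        dV = p * k * T_lag * V_lag - c * V       # delayed production - clearance
        T += dt * dT
        V += dt * dV
        hist.pop(0)
        hist.append((T, V))
        out.append(((n + 1) * dt, T, V))
    return out
```

With these constants the basic reproduction number p·k·T₀/c is below 1, so the virus is cleared, which corresponds to the stable disease-free equilibrium case discussed in the abstract.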
Modeling the Effects of Time Delay on HIV-1 in Vivo Dynamics in the Presence of ARVs
doi:10.11648/j.sjams.20150304.17
Science Journal of Applied Mathematics and Statistics
2015-08-14
© Science Publishing Group
Kirui Wesley
Rotich Kiplimo Titus
Bitok Jacob
Lagat Cheruiyot Robert
Modeling the Effects of Time Delay on HIV-1 in Vivo Dynamics in the Presence of ARVs
3
4
213
213
2015-08-14
2015-08-14
10.11648/j.sjams.20150304.17
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150304.17
© Science Publishing Group
On One 3-Dimensional Boundary-Value Problem with Inclined Derivatives
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150304.14
A boundary-value problem with inclined derivatives in 3-dimensional space, with boundaries that are surfaces of Liapunov type, is considered in the paper. The method of investigation is based on the necessary conditions. The advantage over the theory of potentials is that no limit passage is required: we use boundary values obtained from the principal relationships, called necessary conditions. Note that the directions of the derivatives given in the boundary conditions are arbitrary; tangent directions may form some subset of the given directions.
On One 3-Dimensional Boundary-Value Problem with Inclined Derivatives
doi:10.11648/j.sjams.20150304.14
Science Journal of Applied Mathematics and Statistics
2015-07-07
© Science Publishing Group
Mekhtiyev Magomed Farman
Aliyev Nihan Alipanah
Fomina Nina Ilyinichna
On One 3-Dimensional Boundary-Value Problem with Inclined Derivatives
3
4
193
193
2015-07-07
2015-07-07
10.11648/j.sjams.20150304.14
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150304.14
© Science Publishing Group
Detection of Non-Linearity in the Time Series Using BDS Test
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150304.13
Determining the status of a series is a very important issue that must be addressed before embarking on its statistical analysis; this paper therefore examines the status of commercial bank savings in Nigeria. The analysis showed that the series was not stationary at level (figure 1) but became stationary at the first difference (figure 2); likewise, the unit root test showed non-stationarity at level (table 1) and stationarity at the first difference (table 2). This paved the way for the application of the Brock-Dechert-Scheinkman test (table 3), which revealed that the series is best estimated by a non-linear model: the null hypothesis of linearity was rejected outright and the alternative accepted. The importance of this result lies in the fact that it guards against model misspecification, since using a linear model to estimate the parameters of a non-linear model results in judgmental model error.
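The core quantity behind the BDS test is the correlation integral C_m(ε): the fraction of pairs of m-histories of the series lying within ε of each other. For iid data C_m(ε) ≈ C_1(ε)^m, and the BDS statistic is a standardized version of the deviation from that equality. A minimal sketch of the raw (unstandardized) deviation, with function names of my own:

```python
def correlation_integral(series, m, eps):
    """C_m(eps): fraction of pairs of m-histories of the series that lie
    within eps of each other in the max norm."""
    n = len(series) - m + 1
    hist = [series[i:i + m] for i in range(n)]
    close, pairs = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            pairs += 1
            if max(abs(a - b) for a, b in zip(hist[i], hist[j])) < eps:
                close += 1
    return close / pairs

def bds_deviation(series, m, eps):
    """Core BDS quantity C_m - C_1**m: zero in expectation for iid data,
    nonzero when the series has (possibly non-linear) structure."""
    return correlation_integral(series, m, eps) - correlation_integral(series, 1, eps) ** m
```

The full test divides this deviation by its asymptotic standard error to obtain a normal test statistic; the deterministic alternating series in the test below shows the deviation picking up non-iid structure.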
Detection of Non-Linearity in the Time Series Using BDS Test
doi:10.11648/j.sjams.20150304.13
Science Journal of Applied Mathematics and Statistics
2015-07-06
© Science Publishing Group
Akintunde, M. O.
Oyekunle, J. O.
Olalude G. A.
Detection of Non-Linearity in the Time Series Using BDS Test
3
4
187
187
2015-07-06
2015-07-06
10.11648/j.sjams.20150304.13
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150304.13
© Science Publishing Group
Spline Regression in the Estimation of the Finite Population Total
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150305.11
This study sought to estimate the finite population total using a spline regression function. It compared spline regression with the sample mean estimator and with design-based and model-based estimators. To measure the performance of each estimator, the study considered the average bias, the efficiency (by use of the mean square error) and the robustness (using the rate of change of efficiency). Five populations were used in this research: three were simulated according to linear homoscedastic, quadratic homoscedastic and linear heteroscedastic models, and two were natural populations. The performance of the five estimators was studied on these five populations. The study found that the Sample Mean (SM), Horvitz-Thompson (HT) and Ratio (R) estimators are not robust, while the Nadaraya-Watson (NW) and Periodic Spline (PS) estimators are robust when linearity and homoscedasticity of the population structure are violated.
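Two of the estimators the abstract compares can be written in a few lines; a minimal sketch (function names are mine), assuming a sample drawn with known inclusion probabilities π_i:

```python
def sample_mean_total(y_sample, N):
    """Sample-mean (expansion) estimator of the population total: N * ybar."""
    return N * sum(y_sample) / len(y_sample)

def horvitz_thompson_total(y_sample, pi):
    """Design-based Horvitz-Thompson estimator: sum of y_i / pi_i over the sample."""
    return sum(y / p for y, p in zip(y_sample, pi))
```

Under equal-probability sampling (π_i = n/N for all units) the two estimators coincide; they diverge, and HT's design-unbiasedness matters, when inclusion probabilities are unequal.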
Spline Regression in the Estimation of the Finite Population Total
doi:10.11648/j.sjams.20150305.11
Science Journal of Applied Mathematics and Statistics
2015-09-02
© Science Publishing Group
Joseph Kipyegon Cheruiyot
Spline Regression in the Estimation of the Finite Population Total
3
5
224
224
2015-09-02
2015-09-02
10.11648/j.sjams.20150305.11
http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=149&doi=10.11648/j.sjams.20150305.11
© Science Publishing Group