3rd Sept 2024 Shift 1:
| Examination: | UGC NET |
| Subject: | COMMERCE (Paper 2) |
| Exam cycle: | 3rd Sept 2024 Shift 1 |
| Type of Paper: | PYQs (Previous Year Questions) |
| Unit: | Unit 5: Business Statistics and Research Methods |
Question No.1
If two regression lines are: 8x – 10y + 66 = 0 and 40x – 18y = 214, then X̅ & Y̅ are respectively
- 13, 14
- 16, 15
- 14, 13
- 13, 17
Solutions:
The correct answer is 13, 17.
Key Points
- Explanation of Regression Lines:
- The given regression lines are: 8x – 10y + 66 = 0 and 40x – 18y = 214.
- The intersection of these lines represents the means (X̅ and Y̅) of the variables.
- Finding the Point of Intersection:
- To find X̅ and Y̅, solve the two equations simultaneously:
- Simplify the first equation: 8x – 10y = -66
- Simplify the second equation: 40x – 18y = 214
- Multiply the first equation by 5 to align the x coefficients: 40x – 50y = -330
- Solve the system of equations:
- 40x – 50y = -330
- 40x – 18y = 214
- Subtract the first equation from the second to eliminate x:
- (40x – 18y) – (40x – 50y) = 214 – (-330)
- 32y = 544
- Solve for y: y = 544 / 32 = 17
- Substitute y = 17 into the first equation: 8x – 10(17) = -66, so 8x = 104 and x = 13
- Thus, the means are X̅ = 13 and Y̅ = 17.
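As a cross-check, the point of intersection can be computed by solving the 2×2 system directly (a minimal sketch in plain Python using Cramer's rule; the helper function is illustrative, not part of the original solution):

```python
# Solve the system:
#    8x - 10y = -66
#   40x - 18y = 214
# The intersection (x, y) gives the means (X-bar, Y-bar).

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

x_bar, y_bar = solve_2x2(8, -10, -66, 40, -18, 214)
print(x_bar, y_bar)  # 13.0 17.0
```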
Question No.2
An exploratory study is finished when the researcher has achieved the following:
A. Established the major dimensions of the research task
B. Defined a set of subsidiary investigative questions that can be used as guides to a detailed research design
C. Developed several hypotheses about possible causes of a management dilemma
D. Learned that certain other hypotheses are such remote possibilities that they can be safely ignored in any subsequent study
E. Concluded additional research is needed and it is feasible
Choose the correct answer from the options given below:
- A & B only
- A, B & C only
- A, B, C & D only
- B, C, D & E only
Solutions:
The correct answer is A, B, C & D only.
Key Points
- Established the major dimensions of the research task (A):
- This involves identifying the key areas and boundaries of the research topic.
- It helps in setting a clear scope for the study and ensures that the research is focused and manageable.
- Defined a set of subsidiary investigative questions that can be used as guides to a detailed research design (B):
- These questions break down the main research question into smaller, more manageable parts.
- They provide a roadmap for the detailed research design and help in addressing specific aspects of the research problem.
- Developed several hypotheses about possible causes of a management dilemma (C):
- Hypotheses are tentative explanations that can be tested through further research.
- In an exploratory study, developing hypotheses helps in understanding potential reasons behind a management issue.
- Learned that certain other hypotheses are such remote possibilities that they can be safely ignored in any subsequent study (D):
- This involves identifying and ruling out unlikely explanations.
- It streamlines the research process by focusing on more plausible hypotheses.
Additional Information
- Exploratory studies are typically conducted when the researcher has a limited understanding of the topic and seeks to gain insights.
- They are often the initial phase of a larger research project, setting the stage for more detailed and structured research.
- Methods used in exploratory studies include literature reviews, expert interviews, and focus groups.
- The main goal is to generate ideas, identify key issues, and establish a foundation for future research.
Question No.3
Arrange the steps of Sampling Design in the form of questions that are to be answered in securing a sample:
A. What is the appropriate sampling method?
B. What are the parameters of interest?
C. What size sample is needed?
D. What is the target population?
E. What is the sampling frame?
Choose the correct answer from the options given below:
- E, D, B, A, C
- B, E, A, C, D
- C, B, A, E, D
- D, B, E, A, C
Solutions:
The correct answer is D, B, E, A, C.
Key Points
- Defining the target population (D):
- This is the first step where you identify the group of individuals or items you are interested in studying.
- It forms the basis for the entire sampling process as it sets the boundaries of the study.
- Identifying the parameters of interest (B):
- Next, you determine what specific characteristics or metrics you want to measure within the target population.
- These parameters guide the selection process and ensure that the sample will be relevant to your study objectives.
- Selecting the sampling frame (E):
- In this step, you identify a list or database from which the sample will be drawn.
- The sampling frame should accurately represent the target population to avoid selection bias.
- Choosing the sampling method (A):
- Here, you decide on the technique to be used for selecting the sample, such as random sampling, stratified sampling, etc.
- The choice of method affects the accuracy and generalizability of the results.
- Determining the sample size (C):
- The final step involves calculating the number of subjects or units to be included in the sample.
- Sample size affects the reliability and validity of the study results, making it a crucial consideration.
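The last two steps (choosing the method and fixing the size) can be illustrated with simple random sampling from a sampling frame. A minimal sketch using Python's standard library; the frame of 500 units, the seed, and the sample size of 30 are all made up for illustration:

```python
import random

# Hypothetical sampling frame: ID numbers for a target population of 500 units.
sampling_frame = list(range(1, 501))

random.seed(42)  # fixed seed so the draw is reproducible

# Simple random sampling without replacement, with an illustrative sample size of 30.
sample = random.sample(sampling_frame, k=30)

print(len(sample))       # 30
print(len(set(sample)))  # 30 -- no unit is selected twice
```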
Question No.4
Data preparation needs to ensure the accuracy of the data and their conversion from raw form to reduced and classified forms for analysis. It includes which of the following:
A. Coding
B. Data Entry
C. Editing
D. Stemming
E. Aliasing
Choose the correct answer from the options given below:
- A & C only
- B & D only
- A, B & C only
- C, D & E only
Solutions:
The correct answer is A, B, & C only.
Key Points
- Coding (A):
- Coding is the process of converting qualitative data into quantitative form by assigning numerical or other symbols to answers so that responses can be put into a limited number of categories or classes. This is essential for simplifying data handling and analysis.
- This step helps in organizing data into systematic categories, making it easier to analyze and draw conclusions. For instance, responses to open-ended questions can be coded into themes for easier analysis.
- Examples of coding include assigning a number to represent a gender (1 for male, 2 for female) or using numerical codes to categorize survey responses (1 for “Strongly Agree”, 2 for “Agree”, etc.).
- Data Entry (B):
- Data entry is the act of transcribing information from surveys, interviews, or other data collection instruments into a database or spreadsheet. This is a critical step to ensure that data is captured accurately and can be used for analysis.
- The accuracy of data entry is paramount as any errors can lead to incorrect analysis and results. Various techniques such as double data entry (entering the same data twice and cross-checking) and automated tools can help minimize errors.
- An example of data entry is inputting survey responses into an Excel file or a statistical analysis software like SPSS or SAS. Each response is recorded accurately in the corresponding fields.
- Editing (C):
- Editing involves reviewing and correcting collected data to identify and fix errors or inconsistencies. This ensures that the data set is clean, reliable, and suitable for accurate analysis.
- Editing can include checking for missing values, identifying outliers, correcting typographical errors, and ensuring consistency in data entry. For example, ensuring that all dates are in the same format or correcting entries that are out of range for a given variable.
- Tools for data editing range from simple spreadsheet functions to more sophisticated data validation and cleaning tools provided by statistical software like R or Python’s Pandas library.
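The coding and editing steps above can be sketched in plain Python (a minimal illustration; the survey responses and the codebook are made up for the example):

```python
# Hypothetical raw survey responses, as they might arrive after data entry.
raw_responses = ["Strongly Agree", "Agree", "agree", "Disagree", "N/A", "Agree"]

# Coding: map each response category to a numeric code.
codebook = {"strongly agree": 1, "agree": 2, "neutral": 3,
            "disagree": 4, "strongly disagree": 5}

# Editing: normalise whitespace and case, and flag values outside the
# codebook (e.g. "N/A") as missing (None) for later cleaning.
coded = [codebook.get(r.strip().lower()) for r in raw_responses]

print(coded)  # [1, 2, 2, 4, None, 2]
```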
Additional Information
- Stemming (D):
- Stemming is the process of reducing words to their root form. For example, the words “running”, “runner”, and “ran” can be reduced to their root form “run”. It is commonly used in natural language processing and text analysis to improve the consistency and efficiency of the analysis.
- While stemming is important in text data preprocessing, it is not typically involved in general data preparation processes that focus on structuring and cleaning raw data for analysis.
- Aliasing (E):
- Aliasing in the context of data refers to the effect that occurs when continuous signals are sampled and reconstructed inaccurately, leading to a distortion. In data science, aliasing is often a consideration in signal processing and time series analysis.
- Aliasing is not generally a part of standard data preparation tasks, which focus on ensuring data accuracy, cleaning data, and converting raw data into usable forms for analysis.
Question No.5
For testing the hypotheses H0 : μ1 ≤ μ2 and H1 : μ1 > μ2, the critical region (Z) at α = 0.10 and n > 30 will be
- Z ≤ 1.96
- Z > 1.96
- Z > 1.645
- Z ≤ -1.645
Solutions:
The correct answer is Z > 1.645.
Key Points
- Hypothesis Testing in Context of Z-Test:
- In hypothesis testing, we test an assumption (H0) against an alternative (H1). Here, H0: μ1 ≤ μ2 states that the mean of population 1 is less than or equal to that of population 2, while H1: μ1 > μ2 states that the mean of population 1 is greater than that of population 2.
- The decision to reject H0 in favor of H1 is based on the computed Z-value falling in the critical region, determined by the significance level (α).
- Significance Level and Critical Region (α = 0.10):
- At α = 0.10, we allow a 10% probability of incorrectly rejecting H0 (Type I error). Because H1: μ1 > μ2 is directional, this is a right-tailed test: the critical region lies entirely in the upper tail of the Z distribution, and H0 is rejected only for large positive Z, never for negative Z.
- Strictly, the one-tailed critical value at α = 0.10 is 1.28 (1.645 corresponds to α = 0.05, one-tailed), but among the given options only Z > 1.645 places the rejection region in the correct (upper) tail. If the computed Z-value exceeds 1.645, we reject H0 in favour of H1: μ1 > μ2.
- Application in Financial Enterprise Decisions:
- In financial enterprises, such hypothesis testing is often used in performance comparisons. For example, it could test whether a new investment strategy (μ1) yields higher returns than an established benchmark (μ2).
- If the test result falls in the critical region, it may justify shifting funds toward the new strategy, as it shows statistically significant potential for higher returns.
Additional Information
- Two-Tailed vs. One-Tailed Tests:
- A one-tailed test, as used here, is applied when the direction of the effect is hypothesized (e.g., μ1 > μ2), whereas a two-tailed test assesses for any significant difference regardless of direction.
- Z Critical Values:
- Commonly used Z critical values include ±1.96 (α = 0.05, two-tailed), ±1.645 (α = 0.10, two-tailed), 1.645 (α = 0.05, one-tailed) and 1.28 (α = 0.10, one-tailed). These values are standard benchmarks for assessing statistical significance.
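These critical values can be reproduced with Python's standard library (a quick check using `statistics.NormalDist`, available since Python 3.8):

```python
from statistics import NormalDist

z = NormalDist()  # standard normal distribution, mean 0 and sd 1

# One-tailed critical values: the z beyond which the upper tail holds alpha.
print(round(z.inv_cdf(1 - 0.05), 3))  # 1.645 (alpha = 0.05, one-tailed)
print(round(z.inv_cdf(1 - 0.10), 3))  # 1.282 (alpha = 0.10, one-tailed)

# Two-tailed critical value: alpha is split between the two tails.
print(round(z.inv_cdf(1 - 0.05 / 2), 3))  # 1.96 (alpha = 0.05, two-tailed)
```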
Question No.6
A person applies for a loan of ₹1,00,000. The bank informed him that, over the years, it had received 2920 loan applications per year and that the probability of approval was, on average, 0.85. The applicant wants to know the average number of loans approved per year by the bank. What can that number be?
- 1920
- 3250
- 1000
- 2482
Solutions:
The correct answer is 2482
Key Points
Explanation of Correct and Incorrect Options:
- Option 1 (1920):
- This option is incorrect.
- The average number of loans approved per year is calculated by multiplying the total number of loan applications by the probability of approval.
- 1920 is much lower than the expected value calculated by using the given data.
- The correct calculation should be 2920 * 0.85 = 2482.
- Option 2 (3250):
- This option is incorrect.
- The number 3250 is higher than the total number of applications received per year (2920).
- It is impossible to approve more loans than the number of applications received.
- The correct value is 2920 * 0.85 = 2482.
- Option 3 (1000):
- This option is incorrect.
- The number 1000 is much lower than the expected value.
- Using the given data, the average number of loans approved should be 2920 * 0.85 = 2482.
- 1000 does not match with this calculation.
- Option 4 (2482):
- This option is correct.
- The average number of loans approved per year is found by multiplying the number of applications (2920) by the probability of approval (0.85).
- 2920 * 0.85 = 2482.
- This matches the given data and calculation perfectly.
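The calculation is the mean of a binomial distribution, E[X] = n × p. A quick sketch in Python; the simulation and its seed are illustrative additions, not part of the original solution:

```python
import random

n, p = 2920, 0.85  # applications per year, probability of approval

# Expected (average) number of approvals: the mean of a binomial is n * p.
expected = n * p
print(expected)  # 2482.0

# Sanity check: simulate one year of independent approval decisions.
random.seed(1)  # arbitrary fixed seed for reproducibility
simulated = sum(random.random() < p for _ in range(n))
print(simulated)  # a count close to 2482
```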
Question No.7
Which one of the following is the standard deviation of the first 7 (1 to 7) natural numbers?
- 4
- 3
- 2
- 6
Solutions:
The correct answer is 2.
Key Points
- The standard deviation provides insight into the volatility or risk associated with a set of numbers, which can be applied to understanding financial data:
- In finance, standard deviation is used to measure the risk associated with an investment’s return. A higher standard deviation indicates a higher risk, as the returns are more spread out from the expected value.
- For the first 7 natural numbers (1 to 7), the calculation of standard deviation provides a basic statistical insight, useful for understanding larger financial datasets.
- The mean (average) of the first 7 natural numbers is 4. Thus, each number’s deviation from the mean is considered to calculate the standard deviation.
- Considering standard deviation helps financial analysts to forecast future performance and assess the historical volatility of financial instruments.
Additional Information
- Steps to calculate the standard deviation of the first 7 natural numbers (1 to 7):
- Calculate the mean (average): (1+2+3+4+5+6+7)/7 = 4.
- Find the deviations from the mean for each number: (-3, -2, -1, 0, 1, 2, 3).
- Square each deviation: (9, 4, 1, 0, 1, 4, 9).
- Calculate the average of these squared deviations: (9+4+1+0+1+4+9)/7 = 4.
- The square root of this average gives the standard deviation: sqrt(4) = 2.
- Applications in Financial Enterprise:
- In the financial enterprise, standard deviation is extensively used to measure the risk associated with stock prices, portfolio returns, and investment performance.
- Risk management relies on understanding the standard deviation of returns to gauge the potential volatility and make informed decisions.
- Financial models and tools like the Sharpe ratio use standard deviation to adjust the average return for the risk taken, enhancing investment strategy assessments.
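The calculation steps listed above can be verified with Python's standard library (`statistics.pstdev` computes the population standard deviation, dividing by n rather than n − 1):

```python
from statistics import mean, pstdev

data = range(1, 8)  # the first 7 natural numbers

print(mean(data))    # 4
print(pstdev(data))  # 2.0
```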
Question No.8
Match the List-I with List-II
| LIST I (Shapes) | LIST II (Type of Distribution) |
| A. (shape image not reproduced) | I. Platykurtic Distribution |
| B. (shape image not reproduced) | II. Positively Skewed |
| C. (shape image not reproduced) | III. Negatively Skewed |
| D. (shape image not reproduced) | IV. Leptokurtic Distribution |
Choose the correct answer from the options given below.
- A – III, B – IV, C – I, D – II
- A – II, B – IV, C – I, D – III
- A – III, B – I, C – IV, D – II
- A – IV, B – I, C – III, D – II
Solutions:
The correct answer is A-III, B-II, C-I, D-IV.
Key Points
- Platykurtic Distribution matches the shape with a flatter peak than the normal distribution.
- A Platykurtic distribution is characterized by having a flatter peak compared to the normal distribution.
- This type of distribution indicates fewer extreme values (outliers) and a more uniform spread of values.
- Examples include certain types of uniform distributions where data points are spread more evenly across the range.
- Positively Skewed matches the shape with a long tail on the right side.
- A positively skewed distribution has a longer tail on the right side, indicating more extreme high values.
- Commonly seen in income distributions where most people earn less but a few earn significantly more.
- This type of skewness suggests that the mean is greater than the median.
- Negatively Skewed matches the shape with a long tail on the left side.
- A negatively skewed distribution has a longer tail on the left side, indicating more extreme low values.
- Seen in distributions such as test scores where a majority perform well but a few perform poorly.
- This type of skewness suggests that the mean is less than the median.
- Leptokurtic Distribution matches the shape with a sharper peak than the normal distribution.
- A Leptokurtic distribution is characterized by a sharper peak compared to the normal distribution.
- This type of distribution indicates more extreme values (outliers) and a higher likelihood of values being close to the mean.
- Examples include certain types of distributions where data points are clustered tightly around the mean.
Additional Information
- Understanding the shape and type of distribution is crucial in statistical analysis as it affects the choice of statistical tests and interpretations of results.
- Skewness and kurtosis are measures used to describe the shape of the distribution of data points in a dataset.
- Skewness measures the asymmetry of the distribution, while kurtosis measures the “tailedness” or the sharpness of the distribution’s peak.
- These concepts are fundamental in fields such as economics, psychology, and various branches of science where data distribution plays a key role in analysis.
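The mean–median relationships noted above for skewed data can be checked on a small, made-up sample (the data below are purely illustrative):

```python
from statistics import mean, median

# Hypothetical positively skewed sample: most values are small,
# with one extreme value forming a long right tail.
right_skewed = [1, 2, 2, 3, 3, 4, 20]

print(mean(right_skewed))    # 5 -- pulled upward by the extreme value
print(median(right_skewed))  # 3
print(mean(right_skewed) > median(right_skewed))  # True: positive skew
```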
Question No.9
Which one of the following selection tests answers the question “Does this test measure what it’s supposed to measure”?
- Content validity
- Criterion validity
- Construct validity
- Test validity
Solutions:
The correct answer is Test validity.
Key Points
- Test validity:
- Test validity refers to the degree to which a test accurately measures what it is intended to measure.
- In the context of a financial enterprise, ensuring test validity is crucial as it determines the effectiveness of tools such as risk assessment models, financial forecasting methods, and employee selection tests.
- A valid test ensures that the data collected is relevant and can be used to make informed business decisions.
Additional Information
- Content validity:
- This type of validity assesses whether a test comprehensively covers the domain of the content it’s supposed to measure.
- In a financial enterprise, an example would be ensuring that a financial analyst’s test covers all relevant areas such as market analysis, risk management, and financial reporting.
- Criterion validity:
- Criterion validity evaluates how well one measure predicts an outcome based on another measure.
- For instance, a financial enterprise might use criterion validity to see if an aptitude test can predict future job performance.
- Construct validity:
- Construct validity examines whether a test truly measures the theoretical construct it claims to measure.
- In the financial sector, this could involve validating whether a financial well-being survey accurately assesses an individual’s overall financial health.