1. Introduction to Computationally Supported Resampling Methods in R
1.1. Resampling Fundamentals: History and Basic Principles
1.2. Computational Tools in R: Introduction to the R Environment and Relevant Statistical Packages
1.3. Types of Resampling: Differences and Applications of Parametric and Nonparametric Bootstrap
2. Introduction to Jackknife and Permutation Tests
2.1. Jackknife Theory: Concepts and Applications
2.2. Permutation Tests: Fundamentals and How to Perform Them in R
3. Concepts Related to Empirical Distribution
3.1. Definition and Properties: What It Is and How to Use It
3.2. Constructing the Empirical Distribution in R: Methods and Functions
3.3. Practical Applications: Using the Empirical Distribution in Data Analysis
4. Estimating Standard Errors and Bias Using Resampling
4.1. Concepts of Standard Error and Bias: Definitions and Relevance
4.2. Resampling Methods for Estimation
4.3. Implementation in R: Practical Examples and Case Studies
5. Resampling in Linear Models and Time Series
5.1. Application in Linear Models: Techniques and Examples in R
5.2. Resampling in Time Series: Challenges and Solutions
6. Confidence Intervals and Hypothesis Testing Based on Resampling
6.1. Constructing Confidence Intervals: Resampling-Based Methods
6.2. Performing Hypothesis Testing: Nonparametric Approaches
6.3. Practical Examples in R: Application to Real-World Datasets
7. Applications in Machine Learning: Bagging and Boosting
7.1. Fundamentals of Bagging: Concepts and How to Implement It in R
7.2. Introduction to Boosting: Theory and Practical Applications
7.3. Advantages of Ensemble Methods: Improving Accuracy and Reducing Overfitting
7.4. Practical Exercises: Applying Bagging and Boosting to Machine Learning Projects