A simple project structure for doing and sharing data science work.
The easy way to start a data science project, such as a:
- pet project
- competition entry
- homework assignment
- etc.
Requirements:
- Python 3.5+
- Cookiecutter Python package >= 1.4.0. This can be installed with pip or conda, depending on how you manage your Python packages:

```shell
$ pip install cookiecutter
```

or

```shell
$ conda config --add channels conda-forge
$ conda install cookiecutter
```
To start a new project, run:

```shell
$ cookiecutter https://github.com/mitrofanov-m/cookiecutter-simple-data-science
```

The directory structure of your new project looks like this:
```
├── LICENSE
├── README.md          <- The top-level README for developers using this project.
├── requirements.txt   <- The requirements file for reproducing the analysis environment,
│                         e.g. generated with `pip freeze > requirements.txt`.
│
├── setup.py           <- Makes the project pip-installable (`pip install -e .`) so src can
│                         be imported.
│
├── data
│   ├── external       <- Data from third-party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── models             <- Trained and serialized models, model predictions, or model summaries.
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-`-delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── misc               <- Miscellaneous files: figures, Docker files, additional markdown
│                         files, etc.
│
└── src                <- Source code for use in this project. The name of the project
    │                     will be used.
    ├── __init__.py    <- Makes src a Python module.
    │
    ├── data           <- Module to download or generate data, or to turn raw data into
    │   │                 features for modeling.
    │   ├── make_dataset.py
    │   └── build_features.py
    │
    ├── models         <- Module to train models and then use trained models to make
    │   │                 predictions.
    │   └── baseline.py
    │
    └── visualization  <- Scripts to create exploratory and results-oriented visualizations.
        └── visualize.py
```
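For example, `src/data/make_dataset.py` might start out as a small script along the lines of the sketch below. Everything in it is an assumption for illustration only: the function name, the CSV format, the file names, and the cleaning rule (dropping rows with empty fields). It uses only the standard library, although real projects often reach for pandas here instead.

```python
import csv
from pathlib import Path


def make_dataset(raw_path: Path, processed_path: Path) -> int:
    """Turn a raw, immutable CSV into a cleaned, canonical one.

    Drops any row with an empty field and returns the number of rows kept.
    """
    with open(raw_path, newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        rows = [row for row in reader if all(v.strip() for v in row.values())]

    # data/processed/ may not exist yet in a fresh project checkout.
    processed_path.parent.mkdir(parents=True, exist_ok=True)
    with open(processed_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)


if __name__ == "__main__":
    # Hypothetical file names; adapt to your own data.
    make_dataset(Path("data/raw/dataset.csv"),
                 Path("data/processed/dataset.csv"))
```

Keeping the raw file untouched and writing the cleaned copy to `data/processed` matches the "original, immutable data dump" convention in the tree above.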
We welcome contributions! To set up a development environment, install the requirements:

```shell
$ pip install -r requirements.txt
```
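Finally, to make the layout above concrete, here is a minimal sketch of what `src/models/baseline.py` could contain. The class name and the fit/predict API are assumptions (loosely modeled on scikit-learn's convention), not part of the template itself:

```python
class MeanBaseline:
    """Predict the training-set mean of y for every input.

    A useful sanity check before training real models: anything you
    serialize into the top-level models/ directory should beat this.
    """

    def fit(self, X, y):
        # Store the mean of the targets; X is ignored on purpose.
        self.mean_ = sum(y) / len(y)
        return self

    def predict(self, X):
        # Return the same constant prediction for every sample.
        return [self.mean_ for _ in X]
```

A quick usage example: `MeanBaseline().fit(X_train, y_train).predict(X_test)` returns the training mean once per test sample.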