Quick Start: Your First Organic Flow Experiment

Get started with Organic Flow in 5 minutes. No special tools required - just Git and your favorite editor.

1 Initialize Your Repository

Create a new repository with the Organic Flow structure:

mkdir my-project && cd my-project
git init -b main   # Git 2.28+: name the default branch "main" (later steps check it out)

# Create the knowledge structure
mkdir -p inputs/problems inputs/ideas inputs/observations
mkdir -p assumptions specifications learnings patterns knowledge

# Git doesn't track empty directories, so add placeholder files
touch inputs/problems/.gitkeep inputs/ideas/.gitkeep inputs/observations/.gitkeep \
      assumptions/.gitkeep specifications/.gitkeep learnings/.gitkeep \
      patterns/.gitkeep knowledge/.gitkeep

# Create the experiments registry (experiment entries are appended to it later)
echo "experiments:" > experiments.yaml

# Initial commit (knowledge only!)
git add .
git commit -m "Initialize Organic Flow repository"

2 Document Your First Input

Start with a problem, idea, or observation:

# Create an input document (unquoted EOF so the $(date) command expands)
cat > inputs/problems/slow-test-suite.md << EOF
# Slow Test Suite

**Created**: $(date +%Y-%m-%d)
**Submitted by**: Your Name

## The Problem

Our test suite takes 15 minutes to run, making the feedback loop painfully slow. 
Developers avoid running tests locally, leading to more CI failures.

## Current Impact
- Developers skip tests before pushing
- CI pipeline bottleneck
- Slower feature development

## Constraints
- Must maintain test coverage
- Limited budget for new infrastructure
EOF

git add inputs/problems/slow-test-suite.md
git commit -m "Document slow test suite problem"

3 State Your Assumption

What do you think will work?

cat > assumptions/parallel-test-execution.md << EOF
# Assumption: Parallel Test Execution

**Created**: $(date +%Y-%m-%d)
**Related to**: inputs/problems/slow-test-suite.md

## Hypothesis

If we run tests in parallel using pytest-xdist,
then we can reduce test suite runtime by 60-70%,
given our tests are mostly CPU-bound and independent.

## Validation Criteria
- Test suite runs in under 5 minutes
- No flaky test failures from parallelization
- Easy to run locally and in CI

## Risks
- Some tests might have hidden dependencies
- Database tests might conflict
- Memory usage might spike
EOF

git add assumptions/parallel-test-execution.md
git commit -m "Assume parallel execution will speed up tests"

4 Run Your Experiment

Create an experiment branch to test your assumption:

# Create experiment branch
git checkout -b experiment/parallel-tests

# Now write code to test your assumption
# Install pytest-xdist, update test configuration, fix any issues...
# This is where you do actual coding!
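# For example (a sketch; adapt it to your project):
#   pip install pytest-xdist
#   pytest -n auto          # pytest-xdist: spread tests across all available CPU cores
#
# Commit your changes on this branch before switching back to main:
#   git add -A && git commit -m "Try pytest-xdist for parallel test runs"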

# Track the experiment in main
git checkout main
cat >> experiments.yaml << EOF

  - id: parallel-test-execution
    type: experiment
    branch: experiment/parallel-tests
    started: $(date +%Y-%m-%d)
    status: active
    assumption: "Parallel execution will reduce test time by 60-70%"
EOF

git add experiments.yaml
git commit -m "Track parallel test execution experiment"

5 Harvest Your Knowledge

Extract learnings from your experiment, whether it succeeded or failed:

# After the experiment completes, create a Knowledge PR (here we simply commit to main)
cat > learnings/parallel-test-execution-results.md << EOF
# Learning: Parallel Test Execution Results

**Created**: $(date +%Y-%m-%d)
**Experiment**: experiment/parallel-tests
**Outcome**: Partial Success

## What We Learned

1. **Performance Gain**: Achieved 65% speedup (15min → 5.5min)
2. **Database Conflicts**: 12 tests failed due to database isolation issues
3. **Memory Usage**: Peak memory increased 3x but stayed within limits

## What Worked
- CPU-bound unit tests parallelized perfectly
- pytest-xdist was easy to integrate
- CI integration was straightforward

## What Didn't Work
- Database tests need separate handling
- Some fixtures weren't thread-safe
- Coverage reporting got complicated

## Recommendations
1. Run database tests separately (not parallel)
2. Mark thread-unsafe tests with @pytest.mark.serial
3. Use --dist loadscope for better test grouping
EOF

git add learnings/parallel-test-execution-results.md
git commit -m "Harvest learnings from parallel test experiment"

🌱 Congratulations!

You've just completed your first Organic Flow cycle. You've learned:

  • How to structure knowledge separately from code
  • How to document inputs before jumping to solutions
  • How to state clear assumptions to test
  • How to run experiments in isolated branches
  • How to harvest knowledge regardless of outcome

Next Steps