+```
+
+### Styling
+
+Modify colors and styles in `styles.css`. Key CSS variables are defined at the top:
+
+```css
+:root {
+    --primary-color: #2563eb;
+    --secondary-color: #7c3aed;
+    /* Add more custom variables */
+}
+```
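+
+These variables can also be read or overridden from `script.js` at runtime, for example to support a simple theme switch. A minimal sketch (the replacement color is just an illustration):
+
+```javascript
+// Read the primary color currently defined in styles.css
+const rootStyles = getComputedStyle(document.documentElement);
+console.log('Primary color:', rootStyles.getPropertyValue('--primary-color').trim());
+
+// Override it at runtime (illustrative value)
+document.documentElement.style.setProperty('--primary-color', '#1d4ed8');
+```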
+
+## Contributing
+
+We welcome contributions! Please feel free to submit issues or pull requests.
+
+1. Fork the repository
+2. Create your feature branch (`git checkout -b feature/amazing-feature`)
+3. Commit your changes (`git commit -m 'Add some amazing feature'`)
+4. Push to the branch (`git push origin feature/amazing-feature`)
+5. Open a Pull Request
+
+## Future Enhancements
+
+- [ ] Add interactive visualizations using D3.js or Three.js
+- [ ] Integrate Jupyter notebooks for live demos
+- [ ] Add research paper listings with links
+- [ ] Create video tutorials section
+- [ ] Add team member profiles
+- [ ] Implement search functionality
+
+## License
+
+This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
+
+## Contact
+
+For questions or collaborations, please visit our [GitHub repository](https://github.com/PrecisionNeuroLab/MechanisticInterpretabilityShowcase).
+
+## Acknowledgments
+
+- Inspired by research in mechanistic interpretability
+- Built for the Precision Neuro Lab community
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 0000000..074950a
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,35 @@
+# Documentation
+
+Welcome to the Mechanistic Interpretability Showcase documentation!
+
+## Contents
+
+- [Getting Started](getting-started.md) - Quick start guide
+- [Examples](examples.md) - Example use cases and demos
+- [Contributing](contributing.md) - How to contribute to the project
+
+## Overview
+
+This showcase demonstrates tools and techniques for understanding neural network behavior through mechanistic interpretability. Our goal is to make AI systems more transparent and interpretable.
+
+## What is Mechanistic Interpretability?
+
+Mechanistic interpretability is the study of understanding neural networks by identifying and analyzing the specific algorithms and circuits they learn. Rather than treating neural networks as black boxes, this approach aims to reverse-engineer their internal mechanisms.
+
+### Key Concepts
+
+1. **Feature Visualization**: Understanding what individual neurons or layers respond to
+2. **Circuit Analysis**: Identifying computational pathways within networks
+3. **Activation Patterns**: Analyzing how information flows through the network
+4. **Intervention Studies**: Testing hypotheses about network behavior through targeted modifications (a toy sketch follows this list)
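+
+As a toy illustration of activation patterns and interventions, the sketch below runs a tiny hard-coded two-layer network, records its hidden activations, and then ablates (zeroes) one hidden unit to see how the output changes. The weights and structure are purely illustrative:
+
+```javascript
+// Toy 2-layer network: 2 inputs -> 3 hidden units (ReLU) -> 1 output
+const W1 = [[0.5, -0.2], [0.8, 0.1], [-0.3, 0.9]]; // hidden x input weights
+const W2 = [[0.7, -0.5, 0.2]];                      // output x hidden weights
+const relu = (x) => Math.max(0, x);
+const matVec = (W, v) => W.map(row => row.reduce((sum, w, i) => sum + w * v[i], 0));
+
+function forward(input, ablateUnit = null) {
+    // "Activation pattern": the hidden layer's post-ReLU values
+    const hidden = matVec(W1, input).map(relu);
+    if (ablateUnit !== null) hidden[ablateUnit] = 0; // intervention: zero out one unit
+    return { hidden, output: matVec(W2, hidden) };
+}
+
+const input = [1.0, 2.0];
+const clean = forward(input);
+const ablated = forward(input, 0); // ablate hidden unit 0
+console.log('hidden activations:', clean.hidden);
+console.log('output (clean):', clean.output, '| output (unit 0 ablated):', ablated.output);
+```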
+
+## Resources
+
+- [Main Showcase Page](../index.html)
+- [GitHub Repository](https://github.com/PrecisionNeuroLab/MechanisticInterpretabilityShowcase)
+
+## Quick Links
+
+- **Research**: Learn about our latest findings
+- **Demos**: Try interactive demonstrations
+- **Tools**: Access our open-source tools
diff --git a/docs/examples.md b/docs/examples.md
new file mode 100644
index 0000000..0c7a02a
--- /dev/null
+++ b/docs/examples.md
@@ -0,0 +1,218 @@
+# Examples
+
+This document provides examples of how to extend the Mechanistic Interpretability Showcase.
+
+## Adding New Research Cards
+
+To showcase a new research topic, add a card to the research section:
+
+```html
+<!-- The research-card class name is an assumption; match the existing cards in index.html -->
+<div class="research-card">
+    <h3>Attention Head Analysis</h3>
+    <p>Methods for understanding and visualizing attention mechanisms in transformer models.</p>
+</div>
+```
+
+## Adding a Demo Modal
+
+A small helper in `script.js` can open a pop-up with details about a demo. This is a minimal sketch; the markup and class names in it are assumptions:
+
+```javascript
+function showDemoModal(title, description) {
+    const modal = document.createElement('div');
+    modal.className = 'modal';
+    modal.innerHTML = `
+        <div class="modal-content">
+            <span class="close">&times;</span>
+            <h3>${title}</h3>
+            <p>${description}</p>
+        </div>
+    `;
+    document.body.appendChild(modal);
+
+    modal.querySelector('.close').onclick = () => modal.remove();
+}
+```
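+
+For example, the helper could be wired to each demo card's click handler (the `.demo-card` selector is an assumption):
+
+```javascript
+document.querySelectorAll('.demo-card').forEach(card => {
+    card.addEventListener('click', () => {
+        showDemoModal(card.querySelector('h3').textContent,
+                      card.querySelector('p').textContent);
+    });
+});
+```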
+
+## Best Practices
+
+1. **Keep it Simple**: Start with simple examples and add complexity gradually
+2. **Mobile First**: Design for mobile devices first, then enhance for desktop
+3. **Accessibility**: Use semantic HTML and ARIA labels
+4. **Performance**: Optimize images and minimize JavaScript
+5. **Documentation**: Comment your code and update docs when adding features
+
+## Additional Resources
+
+- [MDN Web Docs](https://developer.mozilla.org/) - Web development reference
+- [D3.js Gallery](https://observablehq.com/@d3/gallery) - Visualization examples
+- [Three.js Examples](https://threejs.org/examples/) - 3D visualization examples
diff --git a/docs/getting-started.md b/docs/getting-started.md
new file mode 100644
index 0000000..03bfab8
--- /dev/null
+++ b/docs/getting-started.md
@@ -0,0 +1,99 @@
+# Getting Started
+
+## Quick Start
+
+This guide will help you get started with the Mechanistic Interpretability Showcase.
+
+### Viewing the Showcase
+
+The easiest way to view the showcase is to open `index.html` in your web browser.
+
+#### Method 1: Direct File Opening
+
+Simply double-click the `index.html` file in your file explorer, and it will open in your default browser.
+
+#### Method 2: Local Web Server (Recommended)
+
+Using a local web server provides a better development experience:
+
+**Using Python:**
+```bash
+# Navigate to the project directory
+cd MechanisticInterpretabilityShowcase
+
+# Start a simple HTTP server
+python -m http.server 8000
+
+# Open http://localhost:8000 in your browser
+```
+
+**Using Node.js:**
+```bash
+# Install http-server globally (one-time setup)
+npm install -g http-server
+
+# Start the server
+http-server
+
+# Open http://localhost:8080 in your browser
+```
+
+**Using VS Code:**
+1. Install the "Live Server" extension
+2. Right-click `index.html`
+3. Select "Open with Live Server"
+
+### Project Structure
+
+```
+MechanisticInterpretabilityShowcase/
+├── index.html              # Main HTML page
+├── styles.css              # Styling and layout
+├── script.js               # Interactive functionality
+├── docs/                   # Documentation
+│   ├── README.md           # Documentation home
+│   ├── getting-started.md  # This file
+│   └── examples.md         # Example use cases
+├── .gitignore              # Git ignore file
+├── README.md               # Project README
+└── LICENSE                 # License information
+```
+
+### Customizing the Showcase
+
+#### 1. Update Content
+
+Edit `index.html` to modify:
+- Hero section text
+- Research topics
+- Demo descriptions
+- About section
+
+#### 2. Change Styling
+
+Edit `styles.css` to customize:
+- Colors (defined in CSS variables)
+- Fonts
+- Layout
+- Animations
+
+#### 3. Add Interactivity
+
+Edit `script.js` to add:
+- Custom animations
+- Interactive features
+- Data visualizations
+- User interactions
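+
+For instance, a few lines in `script.js` could fade cards in as they scroll into view. This is a sketch, and the `.research-card` selector is an assumption:
+
+```javascript
+// Reveal elements as they enter the viewport
+const observer = new IntersectionObserver((entries) => {
+    entries.forEach(entry => {
+        if (entry.isIntersecting) {
+            entry.target.style.opacity = '1';
+            observer.unobserve(entry.target);
+        }
+    });
+}, { threshold: 0.2 });
+
+document.querySelectorAll('.research-card').forEach(el => {
+    el.style.opacity = '0';
+    el.style.transition = 'opacity 0.6s ease';
+    observer.observe(el);
+});
+```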
+
+### Next Steps
+
+1. Explore the [Examples](examples.md) to see what you can build
+2. Check out the main [README](../README.md) for contribution guidelines
+3. Visit the showcase page to see the interface
+
+## Need Help?
+
+If you encounter any issues or have questions:
+- Check the documentation in the `docs/` folder
+- Open an issue on [GitHub](https://github.com/PrecisionNeuroLab/MechanisticInterpretabilityShowcase/issues)
+- Review the examples for common patterns
diff --git a/index.html b/index.html
new file mode 100644
index 0000000..c9e289b
--- /dev/null
+++ b/index.html
@@ -0,0 +1,99 @@
+<!DOCTYPE html>
+<!-- NOTE: element structure and class names in this file are illustrative; adjust them to match styles.css and script.js. -->
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+    <title>Mechanistic Interpretability Showcase</title>
+    <link rel="stylesheet" href="styles.css">
+</head>
+<body>
+    <nav class="navbar">
+        <ul>
+            <li><a href="#research">Research</a></li>
+            <li><a href="#demos">Demos</a></li>
+            <li><a href="#about">About</a></li>
+        </ul>
+    </nav>
+
+    <header class="hero">
+        <h1>Mechanistic Interpretability Tools</h1>
+        <p class="hero-subtitle">Exploring and understanding the inner workings of neural networks</p>
+        <p>
+            We develop cutting-edge tools and techniques for understanding how neural networks process information
+            and make decisions. Our work focuses on making AI systems more transparent and interpretable.
+        </p>
+    </header>
+
+    <section id="research" class="research">
+        <div class="research-grid">
+            <div class="research-card">
+                <h3>Feature Visualization</h3>
+                <p>Tools for visualizing what neurons and layers learn in neural networks.</p>
+            </div>
+            <div class="research-card">
+                <h3>Circuit Analysis</h3>
+                <p>Methods for identifying and understanding computational circuits within models.</p>
+            </div>
+            <div class="research-card">
+                <h3>Activation Patterns</h3>
+                <p>Techniques for analyzing how information flows through network architectures.</p>
+            </div>
+        </div>
+    </section>
+
+    <section id="demos" class="demos">
+        <h2>Interactive Demos</h2>
+        <p>
+            Explore our interactive demonstrations to see mechanistic interpretability in action.
+        </p>
+        <div class="demo-card">
+            <h3>Coming Soon</h3>
+            <p>Interactive demos will be added here to showcase our interpretability tools.</p>
+        </div>
+    </section>
+
+    <section id="about" class="about">
+        <h2>About</h2>
+        <p>
+            This showcase presents mechanistic interpretability tools developed by the Precision Neuro Lab.
+            Our mission is to make neural networks more understandable and transparent through rigorous
+            analysis and visualization techniques.
+        </p>
+