MySQL: My First Real Database

Saurav Sitaula

Before I understood databases, I stored everything in JSON files and localStorage. Then I discovered MySQL, and suddenly data had structure, relationships, and rules. Here's how I went from 'what's a JOIN?' to actually building things that persist.

The JSON File Era

I need to confess something. My first “database” was a JSON file.

{
  "users": [
    { "id": 1, "name": "Saurav", "email": "saurav@example.com", "role": "admin" },
    { "id": 2, "name": "Alice", "email": "alice@example.com", "role": "user" }
  ],
  "posts": [
    { "id": 1, "userId": 1, "title": "Hello World", "content": "My first post" },
    { "id": 2, "userId": 2, "title": "Second Post", "content": "Another one" }
  ]
}

I’d read the file, parse it, modify the JavaScript object, and write it back. Adding a new user? Read entire file. Parse. Push to array. Stringify. Write back.

It worked for a personal project with 10 records. It fell apart the second two people tried to use the app at the same time. One write would overwrite the other. Data gone. No undo.

I needed a real database. Everyone said MySQL.

Installing MySQL: The First Wall

The installation was an experience. I was on Windows at the time (forgive me), and the MySQL installer had about 47 checkboxes, a root password screen that terrified me, and a configuration wizard that asked questions I didn’t understand.

“Would you like to configure MySQL as a Windows Service?”

I don’t know. Would I?

I clicked “Next” on everything, set the root password to password123 (don’t judge me, I was learning), and somehow ended up with a working MySQL instance.

Then I opened the MySQL command line client:

mysql> 

A blinking cursor. No GUI. No buttons. Just me and SQL.

I typed the first command I’d seen in a tutorial:

SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.01 sec)

It responded. I had a database server running on my machine. I had no idea what information_schema was, but it felt like progress.

Creating My First Database

CREATE DATABASE my_app;
USE my_app;

Two lines and I had a database. Simpler than I expected.

Then the first table:

CREATE TABLE users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    email VARCHAR(255) NOT NULL UNIQUE,
    role ENUM('admin', 'user', 'editor') DEFAULT 'user',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

I stared at this for a while. Every word meant something:

  • INT AUTO_INCREMENT PRIMARY KEY — the database assigns IDs for me. No more manually tracking the next ID in my JSON file.
  • VARCHAR(100) NOT NULL — a string, max 100 characters, can’t be empty. The database enforces this. Not my code. The database.
  • UNIQUE — no two users can have the same email. The database enforces this too.
  • ENUM — only these three values are allowed. Try inserting role = 'superadmin' and MySQL says no.
  • DEFAULT CURRENT_TIMESTAMP — auto-fills the creation time.

In my JSON file, none of these rules existed. A user could have an empty name, a duplicate email, or a role of “banana.” The database didn’t care because the database was a text file. MySQL cared. MySQL had opinions.
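Here's what that opinion looks like in practice. A hedged sketch: in MySQL's default strict SQL mode (the default since 5.7), an out-of-range ENUM value is rejected outright; older non-strict servers would silently coerce it instead.

```sql
-- Attempting a role that isn't in the ENUM list
INSERT INTO users (name, email, role)
VALUES ('Eve', 'eve@example.com', 'banana');
-- In strict mode this fails with something like:
-- ERROR 1265 (01000): Data truncated for column 'role' at row 1
```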

The First INSERT

INSERT INTO users (name, email, role) VALUES ('Saurav', 'saurav@example.com', 'admin');
INSERT INTO users (name, email) VALUES ('Alice', 'alice@example.com');
SELECT * FROM users;
+----+--------+---------------------+-------+---------------------+
| id | name   | email               | role  | created_at          |
+----+--------+---------------------+-------+---------------------+
|  1 | Saurav | saurav@example.com  | admin | 2019-01-20 14:30:00 |
|  2 | Alice  | alice@example.com   | user  | 2019-01-20 14:30:05 |
+----+--------+---------------------+-------+---------------------+

Alice got id = 2 automatically. Her role defaulted to user because I didn’t specify one. Her created_at filled itself in.

Then I tried:

INSERT INTO users (name, email) VALUES ('Bob', 'saurav@example.com');
ERROR 1062 (23000): Duplicate entry 'saurav@example.com' for key 'email'

The database stopped me from inserting a duplicate email. No if statement in my code. No validation function. The database said no. I’d never felt so protected.

Discovering JOINs: The Real Power

Tables alone weren’t exciting. The magic started when tables talked to each other.

CREATE TABLE posts (
    id INT AUTO_INCREMENT PRIMARY KEY,
    user_id INT NOT NULL,
    title VARCHAR(255) NOT NULL,
    content TEXT,
    published BOOLEAN DEFAULT FALSE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (user_id) REFERENCES users(id)
);

That FOREIGN KEY line. It means: the user_id in posts must exist in the users table. Try to create a post for user ID 999 (who doesn’t exist) and MySQL refuses. Data integrity, enforced at the database level.
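A hedged sketch of what that refusal looks like (the exact error text varies a bit by version):

```sql
-- No user with id 999 exists
INSERT INTO posts (user_id, title) VALUES (999, 'Ghost post');
-- MySQL rejects it with something like:
-- ERROR 1452 (23000): Cannot add or update a child row:
-- a foreign key constraint fails
```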

INSERT INTO posts (user_id, title, content, published) VALUES 
(1, 'Learning MySQL', 'Today I learned about databases...', TRUE),
(1, 'JOINs Are Cool', 'Let me tell you about JOINs...', TRUE),
(2, 'Hello from Alice', 'This is my first post!', FALSE);

Now the moment I’d been waiting for:

SELECT users.name, posts.title, posts.published 
FROM posts 
JOIN users ON posts.user_id = users.id;
+--------+------------------+-----------+
| name   | title            | published |
+--------+------------------+-----------+
| Saurav | Learning MySQL   |         1 |
| Saurav | JOINs Are Cool   |         1 |
| Alice  | Hello from Alice |         0 |
+--------+------------------+-----------+

Two tables. One query. The data is related. In my JSON file, I would have had to loop through posts, find the matching user by userId, and stitch them together manually. Here, the database does it in milliseconds.

-- Only published posts, with author names, newest first
SELECT users.name AS author, posts.title, posts.created_at
FROM posts
JOIN users ON posts.user_id = users.id
WHERE posts.published = TRUE
ORDER BY posts.created_at DESC;

Filtering, sorting, joining—all in one query. All fast. All handled by the database engine, not my application code.

phpMyAdmin: When the Terminal Got Scary

Let’s be honest: the MySQL command line was intimidating. Especially for complex queries and large result sets.

Then I found phpMyAdmin. A web interface for MySQL. Click to browse tables, visual query builder, export/import buttons. It felt like cheating after typing raw SQL.

But here’s what I learned: phpMyAdmin was great for exploring data, but it was a crutch for learning SQL. The developers I admired all wrote SQL by hand. The command line was faster once you knew the syntax. phpMyAdmin was training wheels, and that was fine.

I used phpMyAdmin for months before gradually moving to the terminal. No shame in that.

The Mistakes That Taught Me

Mistake 1: No Indexes

My app had a search feature. Users could search posts by title:

SELECT * FROM posts WHERE title LIKE '%mysql%';

With 100 posts, instant. With 10,000 posts, noticeable lag. With 100,000 posts, painfully slow.

I didn’t know about indexes. MySQL was scanning every single row, checking if the title contained “mysql.” A full table scan. On every search request.

-- Adding an index on frequently searched columns
CREATE INDEX idx_posts_title ON posts(title);
-- Caveat I learned later: a B-tree index like this helps prefix
-- searches (LIKE 'mysql%') and exact matches, but a leading wildcard
-- (LIKE '%mysql%') still forces a full scan

-- For exact lookups, indexes are magic
SELECT * FROM posts WHERE user_id = 1;
-- Without an index: scans all rows
-- With an index on user_id: jumps directly to matching rows

The indexed lookups went from 2 seconds to 20 milliseconds. Same data. One CREATE INDEX statement.

I went back and added indexes to every column I searched or filtered by. It felt like finding a turbo button I didn’t know existed.
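One caveat worth spelling out: a regular B-tree index can't speed up a leading-wildcard search like LIKE '%mysql%'; only prefix searches and exact matches benefit. For substring-style search, MySQL offers FULLTEXT indexes. A sketch (the index name is my own; FULLTEXT on InnoDB requires MySQL 5.6 or later):

```sql
-- Word-based full-text index on post titles
CREATE FULLTEXT INDEX idx_posts_title_ft ON posts(title);

-- Matches posts whose title contains the word "mysql"
SELECT * FROM posts
WHERE MATCH(title) AGAINST('mysql' IN NATURAL LANGUAGE MODE);
```

Note that FULLTEXT matches whole words, not arbitrary substrings, so it behaves a little differently from LIKE.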

Mistake 2: Storing Everything in One Table

My first design looked like this:

CREATE TABLE users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(255),
    address_street VARCHAR(255),
    address_city VARCHAR(100),
    address_state VARCHAR(100),
    address_zip VARCHAR(20),
    address_country VARCHAR(100),
    billing_street VARCHAR(255),
    billing_city VARCHAR(100),
    billing_state VARCHAR(100),
    billing_zip VARCHAR(20),
    billing_country VARCHAR(100)
);

Twelve address columns in the users table. What if a user has three addresses? Add address2_street, address2_city…? That way lies madness.

A senior developer looked at my schema and gently said: “Normalize it.”

CREATE TABLE addresses (
    id INT AUTO_INCREMENT PRIMARY KEY,
    user_id INT NOT NULL,
    type ENUM('shipping', 'billing') NOT NULL,
    street VARCHAR(255),
    city VARCHAR(100),
    state VARCHAR(100),
    zip VARCHAR(20),
    country VARCHAR(100),
    FOREIGN KEY (user_id) REFERENCES users(id)
);

Now a user can have any number of addresses. Each address has a type. The users table stays clean. Related data lives in related tables. That’s normalization in practice, and it took me embarrassingly long to internalize.

Mistake 3: Not Using Transactions

I had a checkout flow that did this:

-- Step 1: Deduct from inventory
UPDATE products SET stock = stock - 1 WHERE id = 42;

-- Step 2: Create the order
INSERT INTO orders (user_id, product_id, total) VALUES (1, 42, 29.99);

-- Step 3: Create payment record
INSERT INTO payments (order_id, amount, status) VALUES (LAST_INSERT_ID(), 29.99, 'completed');

Three separate queries. What if step 2 fails? The stock is already deducted but no order exists. Inventory is wrong. Money is gone.

Transactions fix this:

START TRANSACTION;

UPDATE products SET stock = stock - 1 WHERE id = 42;
INSERT INTO orders (user_id, product_id, total) VALUES (1, 42, 29.99);
INSERT INTO payments (order_id, amount, status) VALUES (LAST_INSERT_ID(), 29.99, 'completed');

COMMIT;
-- If anything fails: ROLLBACK (all three statements are undone)

All or nothing. Either all three steps succeed, or none of them do. The database guarantees this. It’s called ACID (Atomicity, Consistency, Isolation, Durability), and I didn’t appreciate it until I’d shipped bugs that violated every single one of those properties.
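The failure path, spelled out. A hedged sketch: in real application code, the ROLLBACK would typically be issued from an error handler rather than typed by hand.

```sql
START TRANSACTION;

UPDATE products SET stock = stock - 1 WHERE id = 42;
-- Suppose the next step fails, or the application detects a problem:
ROLLBACK;

-- The stock deduction is undone; the database looks exactly
-- as it did before START TRANSACTION.
```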

MySQL in the Real World

By the end of my first few months with MySQL, I had a working mental model:

Tables are structured containers for data. Each row is a record. Each column is a field with a type and constraints.

Relationships connect tables through foreign keys. One-to-many (user has many posts). Many-to-many (posts have many tags, tags belong to many posts—through a junction table).
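The junction-table pattern mentioned above, sketched out (table and column names here are my own illustration, not from a real schema):

```sql
CREATE TABLE tags (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(50) NOT NULL UNIQUE
);

-- The junction table: one row per (post, tag) pairing
CREATE TABLE post_tags (
    post_id INT NOT NULL,
    tag_id INT NOT NULL,
    PRIMARY KEY (post_id, tag_id),
    FOREIGN KEY (post_id) REFERENCES posts(id),
    FOREIGN KEY (tag_id) REFERENCES tags(id)
);

-- All tags on post 1
SELECT tags.name
FROM post_tags
JOIN tags ON post_tags.tag_id = tags.id
WHERE post_tags.post_id = 1;
```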

Queries are questions you ask the database. SELECT to read, INSERT to create, UPDATE to modify, DELETE to remove. JOIN to combine tables. WHERE to filter. ORDER BY to sort.

Indexes make queries fast. Without them, the database scans every row. With them, it jumps to the right rows.

Transactions make operations safe. Multiple related changes happen atomically.

MySQL wasn’t glamorous. It wasn’t the newest thing. But it was everywhere. Every shared hosting plan came with MySQL. Every WordPress site ran on MySQL. Every beginner tutorial used MySQL. It was the Honda Civic of databases—reliable, practical, everywhere.

What I Wish I’d Known Earlier

  1. Learn SQL before learning an ORM. I eventually used Eloquent and Prisma and Drizzle, but understanding raw SQL made me better at all of them. When the ORM generates a slow query, you need to read the SQL to fix it.

  2. Design your schema before writing code. Ten minutes on paper drawing tables and relationships saves ten hours of migrations later.

  3. Index your WHERE clauses. Any column you search, filter, or join on regularly should probably have an index. It’s the single biggest performance win in database work.

  4. Use EXPLAIN to understand your queries. EXPLAIN SELECT ... shows you how MySQL plans to execute a query. Full table scan? Missing index? EXPLAIN tells you.

  5. Backups are not optional. I lost a development database once because I ran DROP TABLE on the wrong terminal tab. There was no backup. I rebuilt the schema from memory. That only happens once.
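Tip 4 in practice. A hedged sketch: the exact columns in EXPLAIN output vary by MySQL version, but the broad strokes hold.

```sql
EXPLAIN SELECT * FROM posts WHERE user_id = 1;
-- Key things to read in the output:
--   type = ALL                       -> full table scan (slow on big tables)
--   type = ref, key = <index name>   -> MySQL is using that index
```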

The Foundation

MySQL was my foundation. Not because it’s the best database (spoiler: that argument never ends), but because it taught me how relational databases think. Tables, rows, columns, keys, joins, indexes, transactions—these concepts are the same in PostgreSQL, SQL Server, Oracle, and SQLite.

I’d go on to use other databases. But every time I sit down with a new one, I’m grateful I started with MySQL. The fundamentals transfer. The SQL transfers. The mental model transfers.

Everything after MySQL was just a dialect.


P.S. — My root password was password123 for about six months before a friend saw it and physically recoiled. I changed it that day. To Password123! with a capital P and an exclamation mark. Security is a journey, not a destination. A long, embarrassing journey.


Saurav Sitaula

Software Architect • Nepal
