Why you shouldn't use SELECT * in production

January 20, 2026 by Maciej Adamski

TL;DR: Using SELECT * in production is a high-interest loan on your future stability. In Go, it leads to runtime crashes (or silently misassigned fields) when the schema changes, hurts database performance by defeating covering indexes, and creates "dark code" that is impossible to refactor safely. Use explicit column lists and low-abstraction drivers like pgx to keep your system predictable.


In a quick prototype or a local script, SELECT * feels like a shortcut. It's easy and fast. But in a production Go application, that shortcut is a landmine waiting for a teammate to step on it.

Software engineering is the art of making trade-offs. While SELECT * offers convenience, it sacrifices stability. Here is why disciplined engineering requires explicit column selection.

What is the "positional scan" crash?

In Go, the standard database/sql package (and even lower-level drivers like pgx) is remarkably literal. When you call row.Scan(), the program doesn't look at column names; it only sees the order of the data coming off the wire.

  • The implicit contract: Your code expects Column 1 to be an ID (int) and Column 2 to be an Email (string).
  • The fragile reality: If a teammate runs a migration that adds a middle_name column, the shape of the result set changes. Depending on where the new column lands, Scan either fails with a mismatched argument count or blindly tries to shove a string into an integer variable.

The result: Your application crashes in production with a type mismatch or argument-count error. By explicitly listing your columns, you pin down both the number and the order of the values you scan, making your logic immune to columns being added or reordered.
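
Here is a minimal sketch of that failure mode using the standard library (the table layout, function, and variable names are hypothetical):

// fetchUser assumes users has exactly two columns: (id, email), in that order.
func fetchUser(ctx context.Context, db *sql.DB, userID int) (int, string, error) {
    var id int
    var email string
    // After a migration adds a middle_name column, SELECT * returns three
    // columns and this call fails at runtime with:
    //   sql: expected 3 destination arguments in Scan, not 2
    err := db.QueryRowContext(ctx, "SELECT * FROM users WHERE id = $1", userID).
        Scan(&id, &email)
    return id, email, err
}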

How does SELECT * kill database performance?

Engineering isn't just about how fast a query runs on the database; it's about the entire data pipeline.

Network and memory bloat

Your users table might be lean today. But what happens when someone adds a profile_picture_blob or a massive metadata JSON field? If you use SELECT *, you are now dragging megabytes of unnecessary data across the network and deserializing it into memory for every single query—even if you only needed a username.
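
The fix is mechanical: name what you need. A minimal sketch, assuming the caller only needs the username:

func getUsername(ctx context.Context, db *pgx.Conn, id int) (string, error) {
    var username string
    // Only one small column crosses the wire, no matter how many blob or
    // JSON fields the users table grows later.
    err := db.QueryRow(ctx, "SELECT username FROM users WHERE id=$1", id).Scan(&username)
    return username, err
}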

The "covering index" hit

This is the Staff-level "why." Databases are fast because of indexes. A "Covering Index" allows the database to answer your query using only the index, without ever touching the actual table (the "heap").

When you use SELECT *, you force the database to fetch every single column from the heap. You effectively kill the database's ability to stay in its "fast lane," turning a millisecond index-only lookup into a disk-I/O bottleneck.
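
A sketch in plain SQL (PostgreSQL 11+; the index and table are hypothetical) showing the difference an explicit column list makes:

-- A covering index: email is the key, name rides along via INCLUDE.
CREATE INDEX users_email_covering ON users (email) INCLUDE (name);

-- Can be answered with an "Index Only Scan"; the heap is never touched.
SELECT email, name FROM users WHERE email = 'alice@example.com';

-- Forces a trip to the heap for every matching row, because the index
-- cannot supply all the columns.
SELECT * FROM users WHERE email = 'alice@example.com';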

Why does greppability matter for refactoring?

Code must be searchable. Imagine you are tasked with renaming a column or deprecating a field.

If you explicitly list email_address in your SQL queries, a simple grep or "Find Usages" in your IDE will show you exactly which parts of your Go application are affected. If you use SELECT *, your code becomes a black box. You have no way of knowing which fields are actually used by the business logic without running the code and hoping for the best.

Explicit code is maintainable code.

Why should you use pgx over "magic" frameworks?

Choosing a tool like pgx over a heavy ORM follows the philosophy of "Simple, not easy."

  • Performance: pgx supports PostgreSQL-specific features like the binary protocol and COPY that generic wrappers hide.
  • Transparency: The SQL you write is the SQL that runs. There are no "hidden" queries generated by a library behind your back.
  • Granular control: It surfaces PostgreSQL-specific error codes, making it easy to distinguish a connection timeout from a constraint violation (see the sketch below).
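
A minimal sketch of that last point with pgx v5 (the helper name is mine; 23505 is PostgreSQL's unique_violation code):

import (
    "errors"

    "github.com/jackc/pgx/v5/pgconn"
)

// isUniqueViolation reports whether err is a PostgreSQL unique-constraint
// violation, as opposed to a timeout or any other failure.
func isUniqueViolation(err error) bool {
    var pgErr *pgconn.PgError
    return errors.As(err, &pgErr) && pgErr.Code == "23505"
}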

What does the code look like: Before and after?

❌ The "landmine" (fragile)

func GetUser(ctx context.Context, db *pgx.Conn, id int) (*User, error) {
    var u User
    // DANGER: if a migration adds or reorders columns, this fails at
    // runtime or, worse, silently scans values into the wrong fields.
    err := db.QueryRow(ctx, "SELECT * FROM users WHERE id=$1", id).Scan(
        &u.ID, 
        &u.Email, 
        &u.Name,
    )
    return &u, err
}

✅ The "contract" (robust)

const userFields = "id, email, name"

// Compile-time concatenation keeps the query a constant: no runtime
// formatting, and the full SQL stays greppable via userFields.
const getUserQuery = "SELECT " + userFields + " FROM users WHERE id=$1"

func GetUser(ctx context.Context, db *pgx.Conn, id int) (*User, error) {
    var u User

    // Explicit list: even if the table grows to 100 columns, this remains fast.
    err := db.QueryRow(ctx, getUserQuery, id).Scan(
        &u.ID, 
        &u.Email, 
        &u.Name,
    )
    
    if err != nil {
        return nil, fmt.Errorf("fetch user: %w", err)
    }
    return &u, nil
}

Frequently Asked Questions

Is SELECT * ever acceptable?

In ad-hoc queries, local scripts, or database exploration tools—yes. In production application code—never. The trade-off always favors stability over convenience.

What if my table has 50 columns and I need most of them?

Define a constant like const userFields = "col1, col2, col3, ..." at the top of your repository file. This keeps queries readable and gives you a single source of truth for which columns you depend on.

Does this advice apply to ORMs like GORM?

Yes. Even ORMs that handle column mapping can suffer from performance issues (fetching unnecessary columns) and make refactoring harder. Prefer explicit Select() clauses over default behavior.
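
A minimal GORM v2 sketch (the User model is hypothetical):

// Default behavior: db.Find(&users) generates SELECT * FROM users.
// Explicit behavior: name the columns the business logic actually consumes.
var users []User
err := db.WithContext(ctx).
    Select("id", "email", "name").
    Find(&users).Error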

How do I catch SELECT * in code reviews?

Add a linter rule or grep check to your CI pipeline that flags any occurrence of SELECT * or select * in your codebase.
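
A minimal CI sketch using plain grep (tune the pattern and paths to your repository):

# Fail the build if any Go file contains a SELECT *.
if grep -rnEi --include='*.go' 'select[[:space:]]+\*' .; then
    echo 'SELECT * found: use an explicit column list' >&2
    exit 1
fi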

Summary

A junior engineer writes code that works today. A senior engineer writes code that works three years from now, even after dozens of schema migrations.

Avoid SELECT *. Be explicit. Treat your SQL as a strict contract, and your production environment will thank you.

About the Author

Maciej Adamski is a software engineer and founder of Dataglitch, specializing in Go backend development and PostgreSQL optimization. He writes about database best practices, software craftsmanship, and the pursuit of simplicity in code.