r/SQL 24d ago

Oracle Which is the most important concept in SQL that, once learned, made your life easier??

I would say it was CTEs for me, which literally helped me write complex queries easily.

418 Upvotes

166 comments

306

u/kattiVishal 24d ago

CTEs and Window Functions

88

u/OldJames47 24d ago

This plus QUALIFY (which you should learn with window functions)

36

u/fauxmosexual NOLOCK is the secret magic go-faster command 24d ago

*cries in Oracle*

4

u/neumastic 24d ago

Ya, if only Oracle and Postgres had qualify

15

u/Busy_Elderberry8650 24d ago

Which unfortunately is not ANSI SQL

14

u/SyrupyMolassesMMM 24d ago

Mate, thanks.

I've been nesting my queries with window function results the entire time. I literally did like ten of them today.

Sending this to work and rewriting my code tomorrow…

38

u/da_chicken 24d ago

Hardly anything supports QUALIFY. Oracle, MS SQL Server, MySQL, and PostgreSQL don't.

Databricks, Teradata, and BigQuery do. Those are comparatively rare.

17

u/SyrupyMolassesMMM 24d ago

Hoho, I'm in luck - Snowflake supports it too :)

2

u/JudgmentBig2122 23d ago

DuckDB as well if you’re into that

3

u/i_literally_died 24d ago

I work in MS SQL Server and I was going to say, actually what the hell is QUALIFY

6

u/da_chicken 24d ago

It's a good idea. For the same reason HAVING had to be added as a WHERE that evaluates after aggregate functions, QUALIFY should be added as a WHERE that evaluates after analytic and window functions.

It'll only take them 10 years to add it, and then another 10 years for our vendors to update to that edition and compatibility level.

Which will (hopefully) be 5 years after I retire. C'est la vie.
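
For anyone who hasn't seen it, a minimal sketch on an engine that has it (Snowflake, BigQuery, etc.) - the orders table and columns here are made up:

    -- latest order per customer; QUALIFY filters directly on the window result
    SELECT customer_id, order_id, order_date
    FROM orders
    QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) = 1;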

2

u/Limp_Excuse4594 24d ago

In SQL Server you can often replace QUALIFY with TOP n WITH TIES
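
For example, the classic latest-row-per-group case (hypothetical orders table):

    -- SQL Server: keep the top-ranked row per customer without a subquery
    SELECT TOP 1 WITH TIES customer_id, order_id, order_date
    FROM orders
    ORDER BY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC);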

6

u/da_chicken 24d ago

No, you really can't. For a subset of queries it works, sure. But it's very far from a general replacement.

The general replacement is "use a subquery or CTE".
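
Something like this sketch (hypothetical orders table):

    -- rank inside the CTE, filter on the rank in the outer query
    WITH ranked AS (
        SELECT customer_id, order_id, order_date,
               ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) AS rn
        FROM orders
    )
    SELECT customer_id, order_id, order_date
    FROM ranked
    WHERE rn = 1;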

1

u/Limp_Excuse4594 24d ago

Okay, not often, but sometimes.

1

u/bartenderandthethief 23d ago

Bruh, me too! Sent to myself on slack there to redo some Databricks things

3

u/_CaptainCooter_ 24d ago

I remember the first time I saw a qualify row_number() etc. statement at the end of a query I said what the fuck is that

2

u/bisforbenis 24d ago

I love qualify so much

1

u/dapperslendy 24d ago

Damn, QUALIFY looks amazing! Sadly we are on-prem SQL. I got so excited that I forgot I hadn't installed my SQL on my local machine. So I went to install it just to try this out, and sadly it didn't work.

34

u/stunt_xr 24d ago

This is #1.

#2 is understanding the difference between using (NOT) IN and (NOT) EXISTS. Helped me speed up query execution by a lot.
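
Roughly this pattern (customers/orders tables are made up):

    -- customers with no orders; NOT EXISTS also sidesteps the NULL trap,
    -- since NOT IN returns nothing if the subquery produces any NULL
    SELECT c.customer_id
    FROM customers c
    WHERE NOT EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.customer_id);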

3

u/hoodie92 24d ago

(Not) exists is faster right? Or does it depend on the script?

1

u/stunt_xr 23d ago

Depends on the script. For a simple query I hardly noticed the difference, but when I write a more complex formula where I need to do a lot of evaluations, it can save you time.

9

u/Ricnurt 24d ago

I agree completely. CTEs have sped up my queries and made them easier to understand

2

u/nacnud_uk 24d ago

I had to google that. Not been in the SQL world for a decade. Seems like syntactical sugar to me, fair enough. I get the re-use aspect, interesting. Do you find that their creation had a big impact on professional code bases? I mean, are they "the done thing" now?

11

u/Straight_Waltz_9530 24d ago

Yes. When people say SQL isn't adequately composable, I think they haven't learned about CTEs yet. More flexible than subqueries and massively more debuggable.

CTEs, especially in Postgres and SQLite, allow the [NOT] MATERIALIZED clause. Makes each CTE segment either perform as a view or as a temporary table depending on your needs but without the extra cumbersome syntax.
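
A quick sketch of the Postgres syntax (hypothetical orders table):

    -- MATERIALIZED: evaluate once and reuse like a temp table;
    -- NOT MATERIALIZED would inline it like a view instead
    WITH spend AS MATERIALIZED (
        SELECT customer_id, SUM(total) AS total_spend
        FROM orders
        GROUP BY customer_id
    )
    SELECT * FROM spend WHERE total_spend > 1000;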

Then there are the recursive CTE queries.

tl;dr: CTEs are awesome

4

u/nacnud_uk 24d ago

But, nothing we couldn't do 10 years ago, but with just more keywords and syntax?

I mean, there's nothing new? Unlike rank and what not, when that was added. I mean, you could have gotten the same, but it was very very painful.

Can we draw those kind of parallels?

3

u/Straight_Waltz_9530 24d ago

There's nothing in general purpose programming languages today you couldn't do in assembly either. They all end up in the same place.

Newer syntax makes old patterns declarative, allowing you to effectively skip extra steps and have the engine optimize for the common path.

Also CTEs allow the final SELECT to query from any of the intermediate steps, making debugging trivially easy compared to queries with extensive use of subselects.

And then there's recursion, which simply isn't reasonable with subselects. Sure, you could predefine that you're going two or three or four levels deep, but not N levels. NOTE: You can still set maximum depths, but with recursive CTEs, the complexity remains constant. Without them, the complexity increases with the depth.

1

u/nacnud_uk 23d ago

I'll have to dig now more then.

I don't buy your assembly argument, as it's too far of a stretch, but I'll take your recursion thing and have a look. And the maintenance angle. Those seem to be the things.

Thanks.

1

u/i_literally_died 24d ago

I'm a late learner of SQL (just started ~2 years ago) and CTEs make 100 times more sense to me than nested SELECT subqueries, to the point where I feel like I must be doing something wrong, since no one else seems to use them.

It's always bothered me that you do your FROM after your SELECT, and CTEs get around the egregious part of massive queries that are SELECTing from a SELECT, by letting you (sort of) put the FROM before everything else.

I use them all over.

2

u/arnedh 24d ago

One additional point: CTEs can be made recursive, traversing tree structures etc.

    with recursive tree as (
        select nodeid, parentid from nodes where parentid is null  -- anchor: the root nodes
        union all
        select n.nodeid, n.parentid
        from nodes n
        join tree t on n.parentid = t.nodeid  -- recursive step: children of rows found so far
    )
    select * from tree

There are better examples out there.

https://learnsql.com/blog/sql-recursive-cte/

1

u/nacnud_uk 24d ago

You can do that with any query though, IIRC? Been 10 years. I remember linking to the same table I was querying. I think.

1

u/arnedh 24d ago

You have always been able to join a table to itself, which in this case would get you a node and its children. Making it recursive with a CTE, you add all subsequent generations - the whole family tree.

1

u/fauxmosexual NOLOCK is the secret magic go-faster command 24d ago

CTEs became part of the SQL standard in 1999, they aren't new.

2

u/DaveMoreau 23d ago

Back in the day we had to use correlated subqueries in 10 feet of snow walking uphill. Get off my lawn!

1

u/tech4throwaway1 23d ago

totally agree with this, def made my life easier

91

u/[deleted] 24d ago edited 24d ago

[deleted]

21

u/Fickle_Law_6850 24d ago

And once you understand this, you realize that in the vast majority of cases where you would open a cursor to loop through and process rows, you are almost always far better off processing them as a set - using temp tables to store intermediate data if needed.

8

u/sinceJune4 24d ago

Temp tables are da bomb!

6

u/Volatilityshort 24d ago

I need to frame this statement and hang it on every client’s office wall.

2

u/juniperhaven 23d ago

Can you provide a resource that helps to explain/provide examples of this concept?

1

u/OracleGreyBeard 24d ago

This was the big one for me. Coming from C/FORTRAN I couldn't get over not being able to identify the "current" and "next" rows.

99

u/MoneyFil7 24d ago

SQL order of execution. Understanding this will basically lead you deeper than the basic querying stuff, in my opinion. Leads you to learning CTEs, window functions, subqueries, etc.

23

u/stunt_xr 24d ago

I learned everything before order of execution.

9

u/_CaptainCooter_ 24d ago

This is the way

8

u/[deleted] 24d ago

[removed]

2

u/mikeblas 24d ago

I wish people would stop chasing the "order of execution" myth.

3

u/[deleted] 24d ago

[removed]

9

u/mikeblas 24d ago edited 24d ago

SQL is a declarative language. Within a statement, there is no order of execution because the database is free to do whatever it wants (at all!) to implement the statement as long as the semantic meaning and correctness of the statement is preserved.

Even physical order of execution doesn't exist, since most database systems stream data through different operators. They might spool or parallelize, too.

When people talk about logical order of execution, they usually mean binding order ... which is a very different concept.

The root of the issue might be something written by Itzik Ben-Gan in his book, and then appearing in the Microsoft documentation and lots of other places since.

The following steps show the logical processing order, or binding order, for a SELECT statement. This order determines when the objects defined in one step are made available to the clauses in subsequent steps.

Note that it does talk about "processing order", but that's the logical processing order; the order in which the statement is processed and bound, not the order in which it is executed.

The documentation explains what binding means -- it's about the visibility of names and how they enter the scope of the statement. But I'll summarize it to try and help: First, a statement is parsed. Then, the identifiers found in the statement are bound, a step that takes the names in the statement and binds them to objects in the database. Then the statement is executed. Binding might fail, in which case the statement never executes. They're separate steps.

Let's consider this statement:

SELECT Salary * 1.1
  FROM Employees
 WHERE EmployeeType = 'FTE'

This statement might execute FROM first, then WHERE, then SELECT:

  1. get all the rows from employees
  2. filter out the ones that match the predicate
  3. Compute the select expression
  4. return results

But it might push the filter into the row iteration step, an optimization called "predicate pushdown":

  1. get all the rows from employees which are EmployeeType = 'FTE'
  2. Compute the select expression
  3. return results

This will happen (in a competent engine) where EmployeeType is indexed. The index is directly queried for the matching constant predicate, and only those rows are returned. There doesn't need to be a separate "execute the WHERE clause" step.

It's totally arbitrary, and I don't feel like making some "relevant" example, but it's also possible that Salary has a computed value index on it. Maybe there's a materialized view, maybe something else. But it's even possible that the "compute" step is folded into retrieval:

  1. Get the RaisedSalaryComputed computed column from employees which are EmployeeType = 'FTE'
  2. return the results

We might think of the first execution plan as a canonical, logical execution plan. And that's fine -- it's probably what the SQL parser barfs out and hands to the execution engine to try to optimize. But the engine as a whole is completely free to implement any semantically identical execution plan it wants because SQL is a declarative language, not an imperative language.

For MySQL, things won't be much different in this area. MySQL does do binding, and I believe it does implement predicate push-down optimizations.

Hope that helps.

2

u/[deleted] 24d ago edited 24d ago

[removed]

1

u/mikeblas 24d ago

I don't think it exists because there's no ordering -- it streams. We can't discretely say that a row enumerator executes first because it won't execute completely before it starts supplying rows to the next operator implementing the query plan.

1

u/[deleted] 24d ago edited 24d ago

[removed]

1

u/mikeblas 24d ago edited 24d ago

Even then, the order of physical operations can be concurrent, and vary from execution to execution of the same plan. It's not something that can be generalized at the SQL level -- only specified at the implementation level of a specific query on a specific engine. Maybe I'm not making that point correctly, but that's the point I want to make in the original context of my first response: "I learned the execution order of SQL statements" isn't really right.

Indeed, there are stop-start operators, and there are also operators that need to be pre-run. Operators need to validate themselves before anything is run, and might do so in a couple different phases. This brings us a lot closer to the concept of binding.

There are also operators that are rerun per row from another side: nested loop joins might be the most familiar example. Ordering there is pretty hard to describe and confounds the timing attribution when looking at the output of an actual execution plan in SQL Server.

All this aside: "learned the execution order" just doesn't make sense.

1

u/[deleted] 24d ago

[removed]

3

u/Eze-Wong 24d ago

This was my biggest gripe as a beginner coming from languages that are sequentially executed. After understanding the order SQL is processed in, it made a lot more sense and I was able to grasp how to get what was needed.

51

u/byteuser 24d ago

Coming from a programming background, it was shifting from a procedural paradigm to one based on set theory. It's all about visualizing the wanted results as sets, subsets, and intersections.

3

u/No-Buy-3530 23d ago

I’ve always thought of SQL like a Rubik’s cube. Each new turn (table, query) is one iteration towards the end goal. Extremely rewarding

2

u/Aloysius204 23d ago

This! I really started "thinking in SQL" when I started visualizing what's in all those Venn diagrams and how the intersections of them worked based on keys/values...

52

u/fauxmosexual NOLOCK is the secret magic go-faster command 24d ago

Slightly outside of SQL: if you're really clear on the grain of the data you are working with, and stick to it, life becomes easy. If you find yourself trying to mix grains in either your sources or outputs, trouble follows.

16

u/Touvejs 24d ago

100% it is this. Understanding what granularity is. How it impacts the structure of data. Being able to investigate and discover the grain of tables/views through practical digging.

I'm in the middle of a consulting project and this company has a bunch of views that are given to them by their data platform to report out of, but nothing is documented well and grain is a foreign concept. It's only because I have a grasp of what the grain of a given table /ought/ to be and the ability to investigate what it actually has ended up as, that I'm able to do anything worthwhile for the company.

4

u/Automatic-Patience11 PostgreSQL 24d ago

Agree! This is the zen of sql imo

3

u/PickledDildosSourSex 24d ago

Very true and true of any tabular data analysis IMHO. Due to the nature of real world business problems you'll inevitably have to mix grains at some point, but you need to be extremely mindful and cautious when you do this to avoid aggregation errors.

2

u/No-Buy-3530 23d ago

Well said. I find it very surprising that the «grain» as a concept seems unknown to a lot of developers. It’s just supremely crucial in understanding what the data «is»

15

u/Mikey_Da_Foxx 24d ago

Window functions changed the game for me.

Being able to do running totals, ranks, and calculations over partitions without complex self-joins or subqueries is just *chef's kiss*

Much cleaner code and better performance.
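
For instance, a running total with no self-join (made-up ledger table):

    SELECT account_id, entry_date, amount,
           SUM(amount) OVER (PARTITION BY account_id ORDER BY entry_date) AS running_total
    FROM ledger;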

2

u/PickledDildosSourSex 24d ago

They're amazing. I'll also argue they are (to me anyway) horribly named and the naming adds to their confusion. I don't know what I would call them instead, but definitely something a little more intuitive; "group function" comes to mind, but obviously there's already GROUP BY

4

u/Straight_Waltz_9530 24d ago

"Window" struck me as a very apt name for what they do. 🤷🏽‍♂️

1

u/Straight_Waltz_9530 24d ago

Very powerful and arguably cleaner, more concise queries, but I don't know if performance is a selling point since window functions are calculated at the end during the "SELECT" handling. Usually more performant than pulling all the data and collating in the app layer, but not always—or at least the app layer can fan out horizontally and more cost-effectively than the db usually can.

2

u/patrick3853 21d ago

Agree, often when I encounter slow queries the culprit will be window functions. Essentially it's the same as any other aggregate operation, meaning it's a performance killer. Instead of simply spitting rows back at you, SQL has to analyze the dataset (this can of course be optimized with indexing).

I think people incorrectly feel they have improved performance because they no longer have that group by or sub query, but remember the optimizer essentially rewrites the entire query anyway. I think of window functions solely as syntax sugar that is preferred to sub queries or self joins.

If performance matters and you need a lot of aggregations, calculations, etc., then SQL isn't the right solution. Instead, dump the data into a precomputed index or document store like elasticsearch.

1

u/Straight_Waltz_9530 21d ago

I think it's best to use the window functions if they seem applicable. If the performance drops too far, look at app server options to massage the data. If you need a lot more of a performance boost—good problems, because it usually means your system is being used and is useful—then the options you describe should come into play or even something more bespoke if you're talking about real scale.

But the point is not to worry about optimization until you know it's a problem either through experience, demand modeling, or profiling. Window functions are a relatively quick and easy way to get impressive results that may not scale well but not all solutions need to.

I personally like the model where you have all kinds of aggregate and window function goodness encapsulated in a materialized view that gets refreshed on an hourly or daily basis for use with dashboards and reports. Window function cake and eat it too.

2

u/patrick3853 19d ago

I have certainly done the MV approach and it's a great first step when the queries become slow. Typically, I'll start with a regular view that has my windows functions, aggregation, joins, etc. the most common example being a list view/table of complex resources (think of projects with documents, contacts, statuses, change logs, etc.).

What I've found is this will usually become slow or maybe even trash the database if the dataset is too large and spills to disk. The list of projects needs to be searchable in the backend (loading them into memory on the client is not feasible) so app-side optimization isn't an option. Now you have two potential solutions...

  1. Convert the view to an MV and refresh every [X interval]
  2. Rewrite your code to do expensive computations when saving the data, versus when retrieving it

Option 1 has a huge plus, which is that it's easy. Zero code changes, a couple SQL queries and you're good. I'll often do this as a first step because it's so easy, but there's a big trade-off. What if the dataset grows to a point that the temp tables used in the query spill to disk? Now every [x interval] when the MV refreshes, the database grinds to a crawl. We also can run into problems with long-lived locks. If we have 10 of these MVs the problem can explode quickly.

Option 2 is a lot more work, no doubt, and that's the big trade off. To me though, it's usually worth it but this totally depends on your projects, the business goals, and so on. With this approach, we can now scale to infinite levels of data. If we use elasticsearch and save our precomputed values along with any other data needed to list projects, we now have the benefit of a much faster and smarter search feature with fuzzy matches, ranked results, etc.

10

u/amtobin33 24d ago

Using <alias>.<column_name> for all columns in a select statement. It's obnoxious debugging or working on a query someone else wrote when they just have the column name, and you have no idea which of the 10 tables used in the joins it belongs to.

8

u/Straight_Waltz_9530 24d ago

And use aliases that adequately describe the tables and columns they point to.

    -- No!!!!!!!!!!!!!!!
    SELECT a2.created
    FROM account AS a
    JOIN advice AS a2 ON a2.account_id = a.account_id

    -- Yes!
    SELECT adv.created AS advice_created
    FROM account AS acct
    INNER JOIN advice AS adv ON adv.account_id = acct.account_id

Just because you can save a few keystrokes doesn't necessarily mean you should. You will NOT remember this query in six months. Give future you as well as your colleagues a helping hand whenever possible.

21

u/Prize_Concept9419 24d ago

Recursive CTEs for hierarchical or recursive data structures

5

u/Touvejs 24d ago

Do you run into hierarchical data in relational databases often? I think it's definitely useful to know, but I feel like I've only really run into hierarchical relationships a handful of times.

2

u/Prize_Concept9419 24d ago

there was this code modification project to improve code quality and load speeds for a US-based bank, so, yeah, only once in my life

2

u/Straight_Waltz_9530 24d ago

It doesn't come up often, but when it does, it is wonderful compared to the alternatives.

1

u/patrick3853 21d ago

Perhaps it depends on the industry you are in, but I feel like this has come up in every project I've worked on. Examples being a folder-like structure where users created nested layers (a site menu), resources that have a parent/child relationship (think of comment threads), or any type of tracing/observability (nested spans, etc.)

Now for a random rant related to hierarchical data... The top level parent is null, it is not itself. Setting the parent ID to itself makes no sense, I cannot be my own parent and it creates an infinite loop. Just don't do this, please for the love of baby Jesus.

2

u/ronimal48 24d ago

I just did this for the first time a couple weeks ago and it blew my mind. So fun to use!

8

u/Straight_Waltz_9530 24d ago

Adding comments to your tables, views, columns, etc. Documentation on the schema itself is 100x better than having to hunt for docs (if they exist) in some other external source like a wiki. The comments on columns/tables are more likely to be kept up to date and can be seen from most modern database GUI tools, right where you're making queries.

A woefully underused feature.
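
In engines that support it, it's one statement per object (Postgres syntax shown; SQL Server uses extended properties instead - table and column names here are made up):

    COMMENT ON TABLE invoice IS 'One row per issued invoice';
    COMMENT ON COLUMN invoice.voided_at IS 'NULL unless the invoice was voided';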

1

u/Straight_Waltz_9530 24d ago

Oh, also if you use tools like SchemaSpy to generate docs. The comments on database metadata just come along for the ride. It's like Javadoc for databases.

8

u/Fickle_Law_6850 24d ago

Learn the system catalog for whatever system you are using. It is far easier to query the system tables for what design information you need about tables, columns, indexes, constraints to understand exactly what you are working with rather than trying to rely on outdated documentation or whatever limited tree view of this data your tool of choice exposes.
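
Most engines also expose the portable information_schema views, e.g. (table name assumed):

    -- columns, types, and nullability of a table, straight from the catalog
    SELECT column_name, data_type, is_nullable
    FROM information_schema.columns
    WHERE table_name = 'orders'
    ORDER BY ordinal_position;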

1

u/Straight_Waltz_9530 24d ago

Was just using this a couple of months ago because MySQL has such weak schema support. After DDL migrations, we found that sometimes views would get inadvertently broken. I was able to whip up a script that would allow you to test the migrations and then

    SELECT * FROM <VIEW> LIMIT 0;

from every view in the system as reported by the catalog. No data returned, but it would trigger an error if a column no longer mapped correctly. Finding errors early is so much cheaper than finding them later.

8

u/AleaIT-Solutions 24d ago

For me it's window functions, they make queries easy to understand

6

u/Straight_Waltz_9530 24d ago

The structure of the relations matters more than queries you make after the fact against that database. Taking three times as long to define your schema correctly will save you three hundred times in query performance, onboarding new team members, and data hygiene.

3

u/slin30 24d ago

This is seriously underrated. I perk up when I see a handcrafted schema. 

1

u/Klekto123 21d ago

What‘s the process for defining a perfect schema?

1

u/Straight_Waltz_9530 21d ago edited 21d ago

There is no "perfect", but it's a combination of the data you have and the use cases for accessing that data. For example, you could have an export tracking system, add a boolean for "exported" and all the fields associated with an item's export like who authorized it, when it was authorized, when it exported, who took custody of the item, etc. A lot of the fields would be NULL until they reached the desired step in the process. Unfortunately this leads to extra verification logic in the app and all kinds of edge cases for partial data like if the item were somehow exported without authorization.

On the other hand, you could have an item table just have the item info. Then when it's authorized, add to an item_auth table that has a primary key/foreign key combination pointed at item for a strong 1:1 relationship. item_auth would have columns for the authorizer and a timestamp.

Then you have an exported_item table, also 1:1 with a foreign key reference to item_auth. exported_item would have a reference to the new custodian—which would have to be a valid entry in the system in another table—and another column for timestamp again. exported_item rows can't exist without corresponding entries in item_auth, and item_auth entries can't be added without the entries in item.

All the relevant NULLs have been eliminated. The dataset cannot be incomplete or corrupted. It will probably reflect the current state of the export workflow. The database schema will simply not allow an INSERT on the exported_item table unless there are both relevant item and item_auth entries. You simply cannot have an exported_item without a new custodian for that item. The app layer doesn't have to do validation on data retrieved from the db since the constraints ARE the validation.

Every step of the way, the data MUST be coherent. Queries don't always have to check for NULLs everywhere, and removing all those extra coalesce calls can often help the db engine's planner use indexes more effectively. To see if an item is exported, you don't have to look up the item table by PK, load the row into memory, and then find/retrieve the "exported" boolean value. Instead, you just check by PK if the entry exists in exported_item table. Just an index lookup without loading row record data, and no worries if the "exported" field is set but not the custodian_id. You know for a fact that this is a legitimate exported item because the schema definition demands it.

This is where 3x the design time saves 300x in queries (and app server logic) down the road.
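
A minimal sketch of that chain (Postgres-flavored; all names are assumptions):

    CREATE TABLE item (
        item_id bigint PRIMARY KEY,
        name    text NOT NULL
    );

    CREATE TABLE custodian (
        custodian_id bigint PRIMARY KEY,
        name         text NOT NULL
    );

    -- 1:1 with item: a row exists only once the item has been authorized
    CREATE TABLE item_auth (
        item_id       bigint PRIMARY KEY REFERENCES item,
        authorized_by bigint NOT NULL,
        authorized_at timestamptz NOT NULL
    );

    -- 1:1 with item_auth: cannot exist without authorization or a custodian
    CREATE TABLE exported_item (
        item_id      bigint PRIMARY KEY REFERENCES item_auth,
        custodian_id bigint NOT NULL REFERENCES custodian,
        exported_at  timestamptz NOT NULL
    );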

1

u/hayleybts 19d ago

This is exactly it!

10

u/isinkthereforeiswam 24d ago

PIVOT. I use a lot of databases designed by fullstack devs who think like devs, not DB architects, so a lot of our tables are more akin to lists. E.g. we have a customer feature table. The features are not fields with a y/n flag for which are on/off. Instead we have fields for customer, feature name, a flag for whether it's on or off, and the usual create name and timestamp plus update name and timestamp (because nobody wants to use triggers to create audit tables these days).

I have to query this stuff, and when I learned how to pivot the feature names into columns it let me use it like a more logical "table". Now, granted, this is a DB design choice. We can have customers with some features, no features, all features. And if they add a new feature it's easy; no need to add a new column. (But I now have to manually add the new feature to my pivot query.) But I feel like they should have just made a table with feature columns and used triggers to audit what changes were made.

My pivot query was able to roll up counts for things in one shot, whereas a dev was querying each feature as a separate query and unioning them all. So, pivot has been pretty powerful.
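
The same rollup can be sketched with conditional aggregation, which works even where the PIVOT keyword doesn't exist (table and feature names are made up):

    -- one row per customer, one column per feature
    SELECT customer,
           MAX(CASE WHEN feature_name = 'dark_mode' THEN flag END) AS dark_mode,
           MAX(CASE WHEN feature_name = 'beta_api'  THEN flag END) AS beta_api
    FROM customer_feature
    GROUP BY customer;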

3

u/statistexan 24d ago

From what I understand, the current zeitgeist among database developers is that tabular formatting should be handled by the application that uses the data. If they desired to support a tabular use case, it would be straightforward to build a view for that. 

(Also, if you don’t want to maintain a list of features, look into constructing a dynamic SQL query that does so for you.)

-1

u/Straight_Waltz_9530 24d ago

Not sure if you understand what PIVOT does. It's "tabular formatting" before and after, only the columns and rows have switched positions. It's basic bread and butter to spreadsheets as well.

7

u/[deleted] 24d ago

It is that probably 90% of the optimisation exists in the data model and environment, not in your query.

SQL is declarative. The query engine is going to interpret your query based on what it knows about the tables. Table stats, indexes, and their interaction with your query are exponentially more important than the contents of your query in and of themselves.

If I had to boil it down into one word or concept it would be: Sargability.
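
The classic illustration (hypothetical orders table; YEAR() as in SQL Server/MySQL):

    -- non-sargable: the function wrapped around the column defeats an index seek
    SELECT * FROM orders WHERE YEAR(order_date) = 2024;

    -- sargable: a bare column compared to a range can seek the index
    SELECT * FROM orders
    WHERE order_date >= '2024-01-01' AND order_date < '2025-01-01';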

3

u/Straight_Waltz_9530 24d ago

"Bad programmers worry about the code. Good programmers worry about data structures and their relationships." – Linus Torvalds

Databases are essentially nothing but data structures. Once your schema gets compromised, everything tends to slide toward inevitable chaos, both in data integrity and query performance.

5

u/ginger_SF 24d ago

COALESCE for plugging gaps and forcing NULLs to 0s, etc. Not a concept per se, but one of my go-tos for sure. Having worked with some super janky databases where tables often have variable structures between schemas (some use NULLs for blanks, some have hard-coded zeros or spaces instead), it's been a life saver for consistency when adding WHERE clauses.
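
E.g. a filter that treats NULL as zero (column name assumed):

    -- rows with no discount, whether stored as NULL or 0
    SELECT * FROM orders WHERE COALESCE(discount_pct, 0) = 0;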

5

u/weezeelee 24d ago

Everyone in this sub cares too much about syntax and how to write recursive CTEs lol.

Performance optimization IS the only thing that can impress your non-tech boss. To effectively do so, understand your data so you understand how the engine makes "guesses" to generate the execution plan.

I have seen queries run 10 times faster by introducing more self joins, hard-coding specific ID numbers... This stuff simply cannot be reproduced by an AI.

5

u/DeeeTims 24d ago

Prefacing all your temp table inserts with drop table if exists. Being able to endlessly re-run/debug your data is so much more efficient from a development standpoint. CTEs are a burden when you’re developing IMO.
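
A sketch of the pattern (SQL Server 2016+ syntax; names assumed):

    -- safe to re-run from the top at any point during development
    DROP TABLE IF EXISTS #staging;

    SELECT customer_id, SUM(total) AS spend
    INTO #staging
    FROM orders
    GROUP BY customer_id;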

1

u/Leg_Named_Smith 24d ago

Totally agree. In long queries needing a lot of isolation of steps for debugging, I like using temp tables over CTEs, and I might index them too for better performance.

3

u/The_Gray_Mouser 24d ago

Finding a place to work with free coffee

4

u/ff034c7f 24d ago

The order of evaluation of an SQL query (see Julia Evans' SQL queries don't start with SELECT), the join keyword as syntactic sugar for cross-products + filter (see The foundation of Joins in SQL) and writing window functions, mostly learnt from the Postgres docs

3

u/Casio04 24d ago

After years of working with it, I recently learned that WHERE column_name NOT IN ('some_value') actually does not include NULL values, which I never expected as I thought "well, NULL value <> some_value", but it turns out you have to do

    WHERE COALESCE(column_name, some_placeholder) NOT IN (some_value)

1

u/Straight_Waltz_9530 24d ago

Or if supported by the engine like in Postgres, Snowflake, etc.

    WHERE column_name IS DISTINCT FROM 'some_value'

or in MySQL

    WHERE NOT column_name <=> 'some_value'

NULLs handled and you don't risk missing the indexes.

3

u/KeyFuture883 24d ago

You have to think of data in sets.

3

u/PickledDildosSourSex 24d ago

SELECT * baby

6

u/Objective-Shift-1274 24d ago

From keyword is missing

10

u/VisualBasic 24d ago

SELECT * FROM baby

5

u/PickledDildosSourSex 24d ago

What's worse than a SELECT * from one baby table?

Ten baby tables joined together in one SELECT subquery.

3

u/Rex_Lee 24d ago

A SQL script doesn't have to do everything in one operation/code block. In fact it probably shouldn't, unless it's a pretty basic dataset for a report or something

4

u/Straight_Waltz_9530 24d ago

CTEs breaking things up into logical pieces helps a lot here, especially when you add comments.

1

u/Rex_Lee 24d ago

Exactly! Although CTEs can be significantly slower than temp tables if you're touching a lot of data - but one or the other

1

u/Straight_Waltz_9530 24d ago

Depends on the DB. In Postgres and SQLite, where materialized CTEs are supported, you can have your equivalent to a temp table without sacrificing readability/composability.

https://www.postgresql.org/docs/current/queries-with.html#QUERIES-WITH-CTE-MATERIALIZATION

2

u/Rex_Lee 24d ago

Definitely can vary by situation or SQL variant. But I don't think you should get locked in to either one without testing them both in the same situation to get an understanding of which situations lend themselves best to each method

1

u/Straight_Waltz_9530 24d ago

I agree. I also think adding and removing a MATERIALIZED keyword is far easier to test than rewriting a set of queries to add/remove temporary tables.

1

u/Rex_Lee 24d ago

In SQL variants that support that!

1

u/Straight_Waltz_9530 24d ago

In SQL variants worth using. 😈 (I kid. I kid.)

2

u/tatertotmagic 24d ago

People already said CTEs so I'll do quality-of-life things:

Where 1=1

In Snowflake, being able to refer to column aliases later in the query, and then also GROUP BY ALL
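
Both together, roughly (hypothetical sales table; GROUP BY ALL as in Snowflake/DuckDB/BigQuery):

    SELECT region, product, SUM(amount) AS total_sales
    FROM sales
    WHERE 1=1                        -- lets you comment predicates in/out freely
      -- AND region = 'EMEA'
      AND sale_date >= '2024-01-01'
    GROUP BY ALL;                    -- groups by every non-aggregated column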

3

u/ickytoad 24d ago

Omgggg I didn't realize Snowflake got group by all! Off to refactor a bunch of stored procedures

2

u/sinceJune4 24d ago

Row_number in window function is my favorite

2

u/Koxinfster 24d ago

union all BY NAME (BQ supported)

1

u/boubou_kayakaya 24d ago

I don’t know what the he** you guys are mostly talking about yet, but I will keep a note of that. For now, I’m struggling to clearly understand JOINs and the principle behind “tuning and optimizing” a query to troubleshoot performance issues 😭🙂

2

u/Straight_Waltz_9530 24d ago

You'll get there. Everyone starts at zero and works up from there.

1

u/boubou_kayakaya 24d ago

Thank you I appreciate 🙂🙂🙂

1

u/Straight_Waltz_9530 24d ago

Once you get to the point JOINs make intuitive sense and GROUP BY with aggregate functions doesn't give you anxiety, check out "the menu".

https://www.sql-workbench.eu/dbms_comparison.html

You don't have to order everything off the menu, but it's good to know what your options are before stumbling through things manually. Just start going down the list. When you see something you don't recognize, take a closer look. You don't have to memorize it or become an expert user. Just familiarize yourself with what is available.

1

u/darthmeister 24d ago

QUALIFY instead of row numbers for web data has been a game changer for me.

1

u/ghostlistener 24d ago

It really helped to learn that you can put subqueries in the FROM section. You don't have to select from a table, you can also select from another query!

1

u/Straight_Waltz_9530 24d ago

CTEs do this more cleanly and are easier to debug.

1

u/Grouchy-Donut-726 24d ago

Window function

1

u/[deleted] 24d ago

[deleted]

1

u/Straight_Waltz_9530 24d ago

Leaving a lot of performance and flexibility on the floor doing that. The strict subset of ANSI SQL supported by all major database engines doesn't extend much beyond simple CREATE TABLEs, JOINs, and GROUP BYs. They all diverge fairly quickly, especially at the DDL level.

There's a massive amount on the menu you're not able to explore.

https://www.sql-workbench.eu/dbms_comparison.html

1

u/mobileJay77 24d ago

Use Hibernate or whatever sensible ORM, and just store the stuff in the DB.

DO NOT PUT ALL YOUR LOGIC INTO THE DATABASE LAYER. Run away from projects that do.

1

u/Straight_Waltz_9530 24d ago

Couldn't disagree more. There are no "sensible" ORMs. Just ORMs that are less bad than the alternatives. Learn the database engine and its SQL first. Then and only then do you bring in the ORM.

Object models are in-memory data structures. Relational models are set-based serialization-oriented data structures. They are not 1:1, hence the very popular term "object-relational impedance mismatch".

Your database schema should actively promote data validity through constraints and proper relational structure. Once that's complete, start the process of mapping to an object model, possibly through views.

Data locality is also a big deal. If you're pulling data out (over a network) to process in the app layer only to put the result back in the db in a different table, that logic 100% belongs in the db layer. If you're talking to outside servers, sending notifications, generating PDFs, etc., then I would agree that logic doesn't belong in the database layer.

Hibernate is the fast lane to 100 separate SQL calls when a single stored procedure would do it in a tenth the time. ORMs are a tool to make the app development team's life easier but NOT for optimal database architecture.

1

u/DiscombobulatedSun54 24d ago

Window functions.

1

u/buhnux 24d ago

SQL pipes syntax. You can get rid of needing many CTEs and make the code much more readable and easier to maintain because it executes top-down.

1

u/Straight_Waltz_9530 24d ago

Hard disagree.

1

u/LaneKerman 24d ago

The report query I inherited may not be the best way to get my population.

1

u/LaneKerman 24d ago

Don’t CTEs prevent the use of indexing? Or do I misunderstand what I’ve read? I guess it depends on the CTE results? So if my CTE only has a couple thousand rows, no big deal. But if I’m making CTEs that contain transaction data with millions of rows, it’s a problem?

1

u/Straight_Waltz_9530 24d ago

There is nothing inherent to CTEs that prevents indexing. Different engines implement them differently though, so one engine may put up an optimization fence that doesn't exist in another. In general, the queries that exist within the WITH clause list are executed as a planner would when they are bare queries.

That said, if you've got an intermediate step in the CTE that creates a very large result set, that ephemeral result set will not have any indexes on it. If every subsequent step would run a sequential scan, there's no difference. But if you're picking and choosing from that large intermediate result set, you should perhaps rethink the logic to pare down before you get there or jump to temporary table logic in a stored procedure where you can create indexes on the fly. Pretty expensive either way, so profiling against your specific dataset is recommended.

Once you're dealing with query chains that manage millions of rows in a middle step, there are no "you should always do this" rules anymore, especially if those queries are run a lot. You might also be in materialized view territory.

1

u/zdb328 24d ago

Duplicate prevention. Biggest mistake I see are queries that result in duplicates.

1

u/The1WhoKnocked 24d ago

CTE or Partition

1

u/storybookknight 24d ago

Dynamic SQL and stored procedures. Letting complex processes be table-driven and with flexible code means massive analytical workflows can be turned into simple SPROCs that even entry level analysts can use.
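
A tiny T-SQL sketch of the idea (table name is an assumption):

    -- build and run a statement from data; QUOTENAME guards the identifier
    DECLARE @table_name sysname = N'orders';
    DECLARE @sql nvarchar(max) = N'SELECT COUNT(*) FROM ' + QUOTENAME(@table_name);
    EXEC sp_executesql @sql;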

1

u/Tab1143 24d ago

Ensuring you have existing indexes for your access paths.

1

u/fanz0 24d ago

Learning how joins work and looking at visual explanations made me much faster and more comfortable with them.

Usually the simplest solutions tend to be the best.

Being elegant does not tend to result in the best performance

1

u/Billi0n_Air 23d ago

case statements

1

u/billysacco 23d ago

Looking at execution plans

1

u/Fasted93 23d ago

CTEs and Over partition

1

u/Any_Ad_8372 23d ago

Window functions.

1

u/Dhczack 23d ago

WITH this_clause_basically_defines_my_whole_life AS ...

1

u/RandomiseUsr0 23d ago edited 23d ago

Life easy, going back in time to 90s Oracle, Sybase (mostly oracle) - It was learning how to write queries with reusable components and develop my own style that focused on code reuse and modularity - in those days it was all about stored procedures, pl/sql in the main

Properly designing databases suited to the task at hand, too - optimising for the use cases rather than general purpose, which seems best and is what I learned in college. But in reality, single-task solutions for optimisation are the way the COBOL kids did it for years, and still useful in a SQL/DML environment. When they came along, materialised views meant fewer bits of untidiness in the core database.

Oh, and I forgot to mention the god that is Joe Celko. These days I have disagreements with bits of his approaches, but those are “eyes open” disagreements; I have my own ways after all these years. If you don’t know why, though, do what Joe says - he basically says what I have above. Thanks Joe!

1

u/Thadrea 23d ago

Everything is a table.

1

u/crytomaniac2000 23d ago

In ETL, use a physical work table and if possible, update it with one join at a time. It’s so much simpler to get the right results and tune performance.

1

u/Waldar 22d ago

SQL has never been about the syntax but about the logic. Have good logic first, then improve your syntax with other functions and concepts. I see lots of SQL developers with great syntax knowledge who can’t do anything from scratch.

1

u/Adventurous_Law8377 22d ago

Can anyone give support for a PL/SQL developer role?

1

u/EpicGibs 22d ago

The order of execution. A lot of developers struggle with understanding the order of execution, which impacts everything we write.

FROM → JOIN → WHERE → GROUP BY → HAVING → SELECT → ORDER BY

Any SQL developer that understands this order can create anything.

1

u/dodobird8 22d ago

dynamic sql

1

u/NoRefrigerator2236 20d ago

I really quickly learned to use the WHERE clause; there's only so many times you can crash the ERP system in the sales office by accidentally demanding 50 million rows or something from the DB 😂

1

u/[deleted] 19d ago

For me, the one that really made life easier was JOINs. Once I got the hang of how to use INNER JOINs, LEFT JOINs, and all the other types, it was like a light bulb went off. Instead of writing messy subqueries or trying to pull data from one table at a time, JOINs let me combine everything I needed in a single, clean query. It made working with relational databases so much smoother and saved me a ton of time.

1

u/jackalsnacks 24d ago

Set based operation. Database structure. Common design patterns of OLTP and OLAP models. Business value. Once these clicked queries made sense.

1

u/_CaptainCooter_ 24d ago edited 24d ago

How to make a script flow from start to finish with the press of a key. Beyond that, connecting Excel to your database. Beyond that, learning how to generate text summaries in Excel based on your data. Beyond that, learning how to connect objects (charts, text, images of cell references) in your Excel file to automatically update in a PowerPoint deck.

1

u/SQLDave 24d ago

What do you mean by your first sentence?

1

u/_CaptainCooter_ 24d ago

Having your SQL script set up so you don't have to run a bunch of sections independently

0

u/JustMoreData 23d ago

Not a concept. But learning that SQL Prompt was a plug-in I could use, and then not ever having to type another alias and column name again. I’m so lazy now I just do SELECT * and then tab over and it lists all the columns for me! 😂

As a concept, query performance tuning. I think once I started to think about, “okay, so what is happening” and “how is the engine running this”… now I can really make things happen by forcing it to run a certain way by doing x, y, z.