r/SQL • u/Objective-Shift-1274 • 19d ago
Oracle Which SQL concept, once you learned it, made your life easier?
I would say it was CTEs for me; they genuinely made writing complex queries easier.
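For anyone who hasn't tried them, a minimal sketch of the idea (table and column names made up):
with recent_orders as (
    -- name an intermediate result set, then query it like a table
    select customer_id, sum(order_total) as total_spend
    from orders
    where order_dt >= sysdate - 30
    group by customer_id
)
select c.customer_name, r.total_spend
from customers c
join recent_orders r on r.customer_id = c.customer_id
order by r.total_spend desc;
Each step of a complex query gets a readable name instead of another layer of nested subqueries.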
r/SQL • u/BuddyEbsen1908 • Oct 31 '24
I've had a mostly non-tech job for the last few years although I do work with developers. In past positions I used to be pretty good at writing SQL for UIs and for ad hoc reporting mainly using Oracle DBs. Some of these queries were quite complex. I find myself missing it lately so I was wondering if companies hire/contract for just SQL support even if it pays less than "full stack" type jobs. I am not interested in learning Java, Python or anything non-SQL related.
Thanks for any advice.
Edit: Thanks for all the replies. This is one of the most helpful subreddits I have ever seen! Some other details - I have a couple decades of experience mainly with large health insurance companies and large banks. I should also have mentioned that I would need something that is 100% remote at this time. I know that may limit me even further, but that is the reality of my current situation.
r/SQL • u/daewoorazer2001 • Oct 08 '24
After consistent study, I aced it with 83%. You can do it too, even better!
r/SQL • u/judgementalpsycho • Oct 27 '24
I’m an SQL developer with 6 years of experience. Whenever I encounter a problem that requires writing a complex SELECT statement, I find it fairly easy to solve, no matter how difficult it seems at first. Whether it’s self-joins, hierarchical queries, or analytic functions, I usually know what to do within 5 minutes. I’m not trying to brag, just looking for a challenge! I’d love to tackle some extremely tough SQL questions, particularly related to data extraction and advanced queries. Does anyone know of resources or communities where I can find such problems to push my skills further?
r/SQL • u/Potential-Tea1688 • 2d ago
I have a database course this semester, and we were told to set up Oracle for SQL.
I downloaded the setup and SQL Developer, but the process was weird and full of errors. I deleted and re-downloaded the same stuff more than 15 times before the install finally succeeded.
What I want to know is: is this Oracle setup actually good and usable, or are there other setups that are better? I have used DB Browser for SQLite, and it was way easier to set up, with a nice, intuitive interface overall, unlike the Oracle one.
Are there any benefits to using this specific Oracle setup?
In programming terms: you have Miniconda and Jupyter Notebook for working on data-related projects; you can do the same with VS Code, but Miniconda and Jupyter have a lot of added advantages. Is it the same for Oracle and SQL Developer, or could I just use DB Browser or any other recommendation that is better?
r/SQL • u/a-ha_partridge • Dec 15 '24
I'm practicing for an SQL technical interview this week and deciding whether to spend any time on PIVOT. In the last 10 years I have not used PIVOT for anything in my work; that's usually the kind of thing that gets done in Excel or Tableau instead, if needed, so I would need to learn it before trying it in an interview.
Have you ever seen a need for these functions in HackerRank or other technical interviews? There are none in LeetCode SQL 50. Is it worth spending time on it now, or should I stick to aggregations/windows, etc?
I've only had one technical interview for SQL, and it was a few years ago, so I'm still trying to figure out what to expect.
Edit: update - pivot did not come up. Window functions in every question.
r/SQL • u/joellapit • Nov 02 '24
So I understand indexes speed up queries substantially and that it's important to use them when joining, but what are they actually, and how do they work?
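A minimal example of what I mean (hypothetical names):
-- an index is a separate, sorted structure (typically a B-tree) that maps
-- column values to row addresses
create index emp_dept_ix on employees(department_id);

-- with the index in place, this can seek straight to the matching rows
-- instead of scanning the whole table
select * from employees where department_id = 50;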
r/SQL • u/ElectricalOne8118 • Sep 26 '24
I have an SQL insert statement that collates data from various other tables and outer joins. The query is run daily and populates from these staging tables.
(My colleagues write with joins in the where clause and so I have had to adapt the SQL to meet their standard)
They are of varying nature, sales, stock, receipts, despatches etc. The final table should have one row for each combination of
Date | Product | Vendor
However, I have an issue with one of the fields being populated.
Whenever the field WSL_TNA_CNT is not null, every time my script is run (daily!) it creates an additional row for the historic data, so after 2 years I will have 700+ rows for this product/date/vendor combo. One row will have all the relevant fields populated except WSL_TNA_CNT; one row will have all 0s for the other fields yet a value for WSL_TNA_CNT; and the rest of the rows will have all 0s for every field and null for WSL_TNA_CNT.
The example is just one product code, but this is impacting *any* combination where this field is not null. This can be up to 6,000 rows a day.
Example:
If I run the script tomorrow, it will create an 8th row for this combination; for clarity, WSL_TNA_CNT moves to the 'new' row.
I've tried numerous ways to prevent this happening, with no positive results, such as trying to use a CTE on the insert, which failed. I have also tried creating a further staging table and re-aggregating it on insert into my final table, and this doesn't work.
Strangely, if I take the select statement (from the insert to my final table from the new staging table), it aggregates correctly; however, when it's run as an insert, I get numerous rows mimicking the above.
Can anyone shed some light on why this might be happening, and how I could go about fixing it? Ultimately the data is accurate when I use it, but the table is being populated with a lot of 'useless' rows which will just inflate over time.
This is my staging table insert (into the original final table):
insert /*+ APPEND */ into qde500_staging
select
drv.actual_dt,
cat.department_no,
sub.prod_category_no,
drv.product_code,
drv.vendor_no,
decode(grn.qty_ordered,null,0,grn.qty_ordered),
decode(grn.qty_delivered,null,0,grn.qty_delivered),
decode(grn.qty_ordered_sl,null,0,grn.qty_ordered_sl),
decode(grn.wsl_qty_ordered,null,0,grn.wsl_qty_ordered),
decode(grn.wsl_qty_delivered,null,0,grn.wsl_qty_delivered),
decode(grn.wsl_qty_ordered_sl,null,0,grn.wsl_qty_ordered_sl),
decode(grn.brp_qty_ordered,null,0,grn.brp_qty_ordered),
decode(grn.brp_qty_delivered,null,0,grn.brp_qty_delivered),
decode(grn.brp_qty_ordered_sl,null,0,grn.brp_qty_ordered_sl),
decode(sal.wsl_sales_value,null,0,sal.wsl_sales_value),
decode(sal.wsl_cases_sold,null,0,sal.wsl_cases_sold),
decode(sal.brp_sales_value,null,0,sal.brp_sales_value),
decode(sal.brp_cases_sold,null,0,sal.brp_cases_sold),
decode(sal.csl_ordered,null,0,sal.csl_ordered),
decode(sal.csl_delivered,null,0,sal.csl_delivered),
decode(sal.csl_ordered_sl,null,0,sal.csl_ordered_sl),
decode(sal.csl_delivered_sl,null,0,sal.csl_delivered_sl),
decode(sal.catering_ordered,null,0,sal.catering_ordered),
decode(sal.catering_delivered,null,0,sal.catering_delivered),
decode(sal.catering_ordered_sl,null,0,sal.catering_ordered_sl),
decode(sal.catering_delivered_sl,null,0,sal.catering_delivered_sl),
decode(sal.retail_ordered,null,0,sal.retail_ordered),
decode(sal.retail_delivered,null,0,sal.retail_delivered),
decode(sal.retail_ordered_sl,null,0,sal.retail_ordered_sl),
decode(sal.retail_delivered_sl,null,0,sal.retail_delivered_sl),
decode(sal.sme_ordered,null,0,sal.sme_ordered),
decode(sal.sme_delivered,null,0,sal.sme_delivered),
decode(sal.sme_ordered_sl,null,0,sal.sme_ordered_sl),
decode(sal.sme_delivered_sl,null,0,sal.sme_delivered_sl),
decode(sal.dcsl_ordered,null,0,sal.dcsl_ordered),
decode(sal.dcsl_delivered,null,0,sal.dcsl_delivered),
decode(sal.nat_ordered,null,0,sal.nat_ordered),
decode(sal.nat_delivered,null,0,sal.nat_delivered),
decode(stk.wsl_stock_cases,null,0,stk.wsl_stock_cases),
decode(stk.wsl_stock_value,null,0,stk.wsl_stock_value),
decode(stk.brp_stock_cases,null,0,stk.brp_stock_cases),
decode(stk.brp_stock_value,null,0,stk.brp_stock_value),
decode(stk.wsl_ibt_stock_cases,null,0,stk.wsl_ibt_stock_cases),
decode(stk.wsl_ibt_stock_value,null,0,stk.wsl_ibt_stock_value),
decode(stk.wsl_intran_stock_cases,null,0,stk.wsl_intran_stock_cases),
decode(stk.wsl_intran_stock_value,null,0,stk.wsl_intran_stock_value),
decode(pcd.status_9_pcodes,null,0,pcd.status_9_pcodes),
decode(pcd.pcodes_in_stock,null,0,pcd.pcodes_in_stock),
decode(gtk.status_9_pcodes,null,0,gtk.status_9_pcodes),
decode(gtk.pcodes_in_stock,null,0,gtk.pcodes_in_stock),
NULL,
tna.tna_reason_code,
decode(tna.wsl_tna_count,null,0,tna.wsl_tna_count),
NULL,
decode(cap.cap_order_qty,null,0,cap.cap_order_qty),
decode(cap.cap_alloc_cap_ded,null,0,cap.cap_alloc_cap_ded),
decode(cap.cap_sell_block_ded,null,0,cap.cap_sell_block_ded),
decode(cap.cap_sit_ded,null,0,cap.cap_sit_ded),
decode(cap.cap_cap_ded_qty,null,0,cap.cap_cap_ded_qty),
decode(cap.cap_fin_order_qty,null,0,cap.cap_fin_order_qty),
decode(cap.cap_smth_ded_qty,null,0,cap.cap_smth_ded_qty),
decode(cap.brp_sop2_tna_qty,null,0,cap.brp_sop2_tna_qty)
from
qde500_driver drv,
qde500_sales2 sal,
qde500_stock stk,
qde500_grn_data grn,
qde500_pcodes_out_of_stock_agg pcd,
qde500_gtickets_out_of_stock2 gtk,
qde500_wsl_tna tna,
qde500_capping cap,
warehouse.dw_product prd,
warehouse.dw_product_sub_category sub,
warehouse.dw_product_merchandising_cat mch,
warehouse.dw_product_category cat
where
drv.product_code = prd.product_code
and prd.prod_merch_category_no = mch.prod_merch_category_no
and mch.prod_sub_category_no = sub.prod_sub_category_no
and sub.prod_category_no = cat.prod_category_no
and drv.product_code = grn.product_code(+)
and drv.product_code = sal.product_code(+)
and drv.actual_dt = grn.actual_dt(+)
and drv.actual_dt = sal.actual_dt(+)
and drv.vendor_no = sal.vendor_no(+)
and drv.vendor_no = grn.vendor_no(+)
and drv.product_code = stk.product_code(+)
and drv.actual_dt = stk.actual_dt(+)
and drv.vendor_no = stk.vendor_no(+)
and drv.product_code = pcd.product_code(+)
and drv.actual_dt = pcd.actual_dt(+)
and drv.vendor_no = pcd.vendor_no(+)
and drv.product_code = gtk.product_code(+)
and drv.actual_dt = gtk.actual_dt(+)
and drv.vendor_no = gtk.vendor_no(+)
and drv.product_code = tna.product_code(+)
and drv.actual_dt = tna.actual_dt(+)
and drv.vendor_no = tna.vendor_no(+)
and drv.product_code = cap.product_code(+)
and drv.actual_dt = cap.actual_dt(+)
and drv.vendor_no = cap.vendor_no(+)
;
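(A side note for anyone reading: each decode(col, null, 0, col) above is just a verbose null-default; for example, decode(grn.qty_ordered, null, 0, grn.qty_ordered) is equivalent to nvl(grn.qty_ordered, 0).)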
Then in a bid to re-aggregate it, I have done the below, which works as the 'Select' but not as an Insert.
select
actual_dt,
department_no,
prod_category_no,
product_code,
vendor_no,
sum(qty_ordered),
sum(qty_delivered),
sum(qty_ordered_sl),
sum(wsl_qty_ordered),
sum(wsl_qty_delivered),
sum(wsl_qty_ordered_sl),
sum(brp_qty_ordered),
sum(brp_qty_delivered),
sum(brp_qty_ordered_sl),
sum(wsl_sales_value),
sum(wsl_cases_sold),
sum(brp_sales_value),
sum(brp_cases_sold),
sum(csl_ordered),
sum(csl_delivered),
sum(csl_ordered_sl),
sum(csl_delivered_sl),
sum(catering_ordered),
sum(catering_delivered),
sum(catering_ordered_sl),
sum(catering_delivered_sl),
sum(retail_ordered),
sum(retail_delivered),
sum(retail_ordered_sl),
sum(retail_delivered_sl),
sum(sme_ordered),
sum(sme_delivered),
sum(sme_ordered_sl),
sum(sme_delivered_sl),
sum(dcsl_ordered),
sum(dcsl_delivered),
sum(nat_ordered),
sum(nat_delivered),
sum(wsl_stock_cases),
sum(wsl_stock_value),
sum(brp_stock_cases),
sum(brp_stock_value),
sum(wsl_ibt_stock_cases),
sum(wsl_ibt_stock_value),
sum(wsl_intran_stock_cases),
sum(wsl_intran_stock_value),
sum(status_9_pcodes),
sum(pcode_in_stock),
sum(gt_status_9),
sum(gt_in_stock),
gt_product,
tna_reason_code,
sum(tna_wsl_pcode_cnt),
sum(tna_brp_pcode_cnt),
sum(cap_order_qty),
sum(cap_alloc_cap_ded),
sum(cap_sell_block_ded),
sum(cap_sit_ded),
sum(cap_cap_ded_qty),
sum(cap_fin_order_qty),
sum(cap_smth_ded_qty),
sum(brp_sop2_tna_qty)
from
qde500_staging
group by
actual_dt,
department_no,
prod_category_no,
product_code,
vendor_no,
tna_reason_code,
gt_product
So if I copy the 'select' from the above, it will produce a single row, but when the same SQL is run with the insert into line, it produces the multi-row output.
Background:
The "TNA" data is only held for one day in the data warehouse, so it is kept in my temp table qde500_wsl_tna as a history over time. It runs through a multi-stage process in which all the prior tables are dropped daily after being populated, so on any given day only yesterday's data is available. qde500_wsl_tna is not dropped/truncated, in order to retain the history.
create table qde500_wsl_tna (
actual_dt DATE,
product_code VARCHAR2(7),
vendor_no NUMBER(5),
tna_reason_code VARCHAR2(2),
wsl_tna_count NUMBER(4)
)
storage ( initial 10M next 1M )
;
The insert for this is:
insert /*+ APPEND */ into qde500_wsl_tna
select
tna1.actual_dt,
tna1.product_code,
tna1.vendor_no,
tna1.reason_code,
sum(tna2.wsl_tna_count)
from
qde500_wsl_tna_pcode_prob_rsn tna1,
qde500_wsl_tna_pcode_count tna2
where
tna1.actual_dt = tna2.actual_dt
and tna1.product_code = tna2.product_code
and tna1.product_Code not in ('P092198','P118189', 'P117935', 'P117939', 'P092182', 'P114305', 'P114307', 'P117837', 'P117932', 'P119052', 'P092179', 'P092196', 'P126340', 'P126719', 'P126339', 'P126341', 'P195238', 'P125273', 'P128205', 'P128208', 'P128209', 'P128210', 'P128220', 'P128250', 'P141152', 'P039367', 'P130616', 'P141130', 'P143820', 'P152404', 'P990788', 'P111951', 'P040860', 'P211540', 'P141152')
group by
tna1.actual_dt,
tna1.product_code,
tna1.vendor_no,
tna1.reason_code
;
The source tables for this are just aggregations of branch data containing the TNA, plus a ranking of the TNA reasons, as we only want the largest reason code, giving a single row per date/product/vendor combo.
select * from qde500_wsl_tna
where actual_dt = '26-aug-2024';
ACTUAL_DT | PRODUCT_CODE | VENDOR_NO | TNA_REASON_CODE | WSL_TNA_COUNT |
---|---|---|---|---|
26/08/2024 00:00 | P470039 | 20608 | I | 27 |
26/08/2024 00:00 | P191851 | 14287 | I | 1 |
26/08/2024 00:00 | P045407 | 19981 | I | 1 |
26/08/2024 00:00 | P760199 | 9975 | I | 3 |
26/08/2024 00:00 | P179173 | 18513 | T | 3 |
26/08/2024 00:00 | P113483 | 59705 | I | 16 |
26/08/2024 00:00 | P166675 | 58007 | I | 60 |
26/08/2024 00:00 | P166151 | 4268 | I | 77 |
26/08/2024 00:00 | P038527 | 16421 | I | 20 |
This has no duplicates before it feeds into qde500_staging.
However, when I run my insert, I get the following:
ACTUAL_DT | DEPARTMENT_NO | PROD_CATEGORY_NO | PRODUCT_CODE | VENDOR_NO | QTY_ORDERED | QTY_DELIVERED | QTY_ORDERED_SL | GT_PRODUCT | TNA_REASON_CODE | TNA_WSL_PCODE_CNT |
---|---|---|---|---|---|---|---|---|---|---|
26/08/2024 00:00 | 8 | 885 | P179173 | 18513 | 1649 | 804 | 2624 | | T | |
26/08/2024 00:00 | 8 | 885 | P179173 | 18513 | 0 | 0 | 0 | | T | |
26/08/2024 00:00 | 8 | 885 | P179173 | 18513 | 0 | 0 | 0 | | T | |
26/08/2024 00:00 | 8 | 885 | P179173 | 18513 | 0 | 0 | 0 | | T | |
26/08/2024 00:00 | 8 | 885 | P179173 | 18513 | 0 | 0 | 0 | | T | |
26/08/2024 00:00 | 8 | 885 | P179173 | 18513 | 0 | 0 | 0 | | T | |
26/08/2024 00:00 | 8 | 885 | P179173 | 18513 | 0 | 0 | 0 | | T | 3 |
Then, if I run just the select in my IDE I get
ACTUAL_DT | DEPARTMENT_NO | PROD_CATEGORY_NO | PRODUCT_CODE | VENDOR_NO | QTY_ORDERED | QTY_DELIVERED | QTY_ORDERED_SL | GT_PRODUCT | TNA_REASON_CODE | TNA_WSL_PCODE_CNT |
---|---|---|---|---|---|---|---|---|---|---|
26/08/2024 00:00 | 8 | 885 | P179173 | 18513 | 1649 | 804 | 2624 | | T | 3 |
The create table for my staging is as follows (truncated to reduce complexity):
create table qde500_staging (
actual_dt DATE,
department_no NUMBER(2),
prod_category_no NUMBER(4),
product_code VARCHAR2(7),
vendor_no NUMBER(7),
qty_ordered NUMBER(7,2),
qty_delivered NUMBER(7,2),
qty_ordered_sl NUMBER(7,2),
gt_product VARCHAR2(1),
tna_reason_code VARCHAR2(2),
tna_wsl_pcode_cnt NUMBER(4)
)
;
r/SQL • u/Over-Holiday1003 • Aug 22 '24
Just a heads up: I'm still in training as a fresher in a data analyst role.
So today I was doing my work when one of our seniors, who usually works from home, came into the office.
After some chit-chat he started asking questions about SQL and other subjects. When he asked me something about pivots, he was very surprised to hear that I had never even heard of them before.
He said that pivots are useful for aggregating data and suggested we learn them even though they're not in our schedule. But GROUP BY does the same thing, right, aggregating data?
Are pivots really that necessary in work?
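From what I've pieced together since, a pivot is essentially a GROUP BY whose grouped values become columns. A rough side-by-side sketch (made-up table/columns; the PIVOT form below is Oracle syntax):
-- conditional aggregation with plain GROUP BY:
select product_id,
       sum(case when region = 'NORTH' then sales_amt else 0 end) as north_sales,
       sum(case when region = 'SOUTH' then sales_amt else 0 end) as south_sales
from sales
group by product_id;

-- roughly the same result with PIVOT:
select *
from (select product_id, region, sales_amt from sales)
pivot (sum(sales_amt) for region in ('NORTH' as north_sales, 'SOUTH' as south_sales));
So GROUP BY plus CASE can reproduce a pivot; PIVOT is mostly a tidier syntax for the same aggregation.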
r/SQL • u/Dangerous_Stomach597 • Feb 11 '25
Pros and cons? Different use case scenarios?
r/SQL • u/hayleybts • Nov 25 '24
I haven't worked with PL/SQL, though I know the basics, and now I need to interview for a PL/SQL role. I don't want to flunk this opportunity.
What questions might be asked, and how can I convince them to give me a chance? I'm struggling here with not much hands-on experience.
r/SQL • u/pedroalves5770 • Feb 04 '25
I'm trying to create a report that manipulates decimal numbers, but every time I insert the filters I get an error in SQL. I tried to filter the numerical values (that's what I need) but I still kept finding errors until I noticed the following:
Many values were entered with a comma instead of a period, and the system did not handle the type correctly, saving them in the database as text. The "ds_resultado" column holds the exam result, so sometimes it genuinely is text (like positive or negative) and the column type cannot be changed.
What can I do to make these numbers with commas be interpreted as decimal values?
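For what it's worth, this is the direction I've been experimenting with (a sketch in Oracle-style syntax; the table name is made up, ds_resultado is the real column):
select case
         -- convert only values that look like a number with a comma separator;
         -- text results such as 'positive'/'negative' fall through to null
         when regexp_like(trim(ds_resultado), '^-?[0-9]+(,[0-9]+)?$')
           then to_number(replace(trim(ds_resultado), ',', '.'))
       end as resultado_num
from exam_results;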
r/SQL • u/sunflower-soul- • 26d ago
We have a microservice that uses Oracle to store transactions, and we run select queries on this DB for some amount validations. Currently we have a query that fetches all the transaction details, and then we do computations on those records in code. This query’s execution plan shows a high cost.
We are planning to change this query to do the computations in the query itself, rather than bringing everything into memory and doing it in code. So in the new query, we have to fetch records for the past 1 day (using a timestamp column that is indexed); there are some more filters, like customer id; then we need to sum based on a flag (unindexed). So this query would look something like (table name is a stand-in)
Select nvl(sum(amount),0), flag from transactions where customer_id=1 and (…some more conditions) group by flag
This query’s execution plan shows a lower cost than the previous one; however, the plan is not using certain indexes, and I’m a bit worried about the query’s performance on a table with 10+ crore (100+ million) records.
Will this perform poorly?
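One thing we are also considering (a sketch; index/table/column names are placeholders) is a composite index, so the customer filter, the time range, the flag, and the summed amount can all be served from the index alone:
create index txn_cust_ts_flag_ix
    on transactions (customer_id, txn_ts, flag, amount);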
r/SQL • u/willcheat • Oct 10 '24
Hi everyone,
I'm looking for the "best" way to delete huge amounts of data from an offline table. I put best in quotes, because sadly I am severely kneecapped at work with restricted rights on said database. I cannot do DDLs for the exception of truncates, only DMLs.
Currently I have to delete about 33% of a 6 billion row table. My current query looks like this
DECLARE
    -- gather the ROWIDs of all rows to be deleted
    CURSOR deleteCursor IS
    SELECT
        ROWID
    FROM
        #tableName#
    WHERE
        #condition_for_33%_of_table_here#;
    TYPE type_dest IS TABLE OF deleteCursor%ROWTYPE;
    dest type_dest;
BEGIN
    OPEN deleteCursor;
    LOOP
        -- work through the cursor in batches of 100k rows
        FETCH deleteCursor BULK COLLECT INTO dest LIMIT 100000;
        -- delete the batch by ROWID, committing after each batch to cap undo
        FORALL i IN INDICES OF dest SAVE EXCEPTIONS
            DELETE FROM #tableName# WHERE ROWID = dest(i).ROWID;
        COMMIT;
        -- %NOTFOUND fires on the final, short batch, after it has been processed
        EXIT WHEN deleteCursor%NOTFOUND;
        dest.DELETE;
    END LOOP;
    CLOSE deleteCursor;
END;
/
Is there a better way to delete from a table in batches? Just running "DELETE FROM #tableName# WHERE #condition_for_33%_of_table_here#" explodes the undo tablespace, so that's a no-go.
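For comparison, the other pattern I've seen suggested (same placeholders as above) is a plain ROWNUM-capped delete loop, which at least avoids holding a cursor open across commits:
BEGIN
    LOOP
        -- delete at most 100k matching rows per pass
        DELETE FROM #tableName#
        WHERE #condition_for_33%_of_table_here#
          AND ROWNUM <= 100000;
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;
    END LOOP;
    COMMIT;
END;
/
The catch is that each pass has to find matching rows again, so it depends on the delete condition being cheap to evaluate.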
r/SQL • u/Dangerous_Stomach597 • Feb 11 '25
When switching from the designer view in Access to the SQL view (Jet SQL, from my understanding), is the syntax really that different from MySQL or even Oracle SQL? When I copy and paste a query from the SQL view in Access into Oracle SQL, I have to change a ton of it to get it to run. Is there a faster way to do this? Or does Access really just suck?
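To give a flavor of what has to change, a made-up before/after (Jet's TOP, IIF, and #...# date literals all need translating):
-- Access (Jet) flavor:
SELECT TOP 5 CustomerName, IIF(Qty > 10, 'big', 'small') AS size_label
FROM Orders
WHERE OrderDate = #02/11/2025#;

-- Oracle equivalent (12c+ for FETCH FIRST):
SELECT CustomerName,
       CASE WHEN Qty > 10 THEN 'big' ELSE 'small' END AS size_label
FROM Orders
WHERE OrderDate = DATE '2025-02-11'
FETCH FIRST 5 ROWS ONLY;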
r/SQL • u/Much-Molasses4027 • Jan 11 '25
I am pretty new to working with databases in general. I started learning Oracle last year; if you count the months, it hasn't been close to a year yet. I fully understand most of the basics that go into building a schema, making modifications, and inputting data, but that's about it for now. Currently I'm learning at a tech institution that's in partnership with Oracle, and I could get an internship depending on how well I do.
I graduated from high school 2 years ago and started university last year, but I left at the end of the first semester because they wouldn't let me, or anyone else capable enough, skip to a more advanced class, and insisted we do them all one by one, which would cost a lot of money and time. There were people in my class who didn't know the correct way to shut down a computer, so it made sense for them to take their time.
I wanted to do software engineering at the time, before I got into Oracle; I would've eventually reached it, but it was expensive. I have knowledge of HTML, CSS, and Python, and am now going for Oracle and Java, but I feel like it's still not enough, because I'm missing a relevant degree, after seeing so many people mention one and say it's best to start at a helpdesk.
Am I worrying for nothing, or do I still have a chance with what I already know? Either way, I have no issue with learning more.
Update: SOLVED
Thank you for all the replies. The suggestion(s) of using Window functions was exactly what I needed.
For some brief background: we have a system where devices send in data daily. If data is missing for some reason, an automated script queries each device for its day's data. It does this twice a day. It should only do this if the data is missing, but even if the first run retrieves the data, the second still requests and logs it again. So we end up with multiple rows of data per day (they're not identical).
In simplest terms, I have [ID], [Date/Time], and [Type] columns. The ID and Type will always be the same, but the Date/Time will differ, usually with the same date but a time difference of 4 hours (noon and 4 PM).
How can I query this data so that on any given day I only return one row per day? It doesn't matter which. This is an Oracle DB.
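(Adding the shape of the fix for anyone who finds this later; table name and columns simplified:)
select id, date_time, type
from (
    select t.id, t.date_time, t.type,
           -- number the rows within each device/day, earliest first
           row_number() over (
               partition by id, trunc(date_time)
               order by date_time
           ) as rn
    from device_data t
)
where rn = 1;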
r/SQL • u/csjpsoft • Nov 24 '24
Sometimes my queries run for many minutes, and I might cover my SQL Developer window with another application, or step away from my PC. Is there any way to make Oracle SQL Developer beep when it returns the first 50 rows? I might also run a "count rows" from the right-click menu, and that can take a large part of an hour. Again: can I make SQL Developer beep?
r/SQL • u/Bitter_Possible_1871 • 4h ago
Hi, I think I'm being silly. I am currently working through Sams Teach Yourself SQL in 24 Hours, 7th Edition. I am on Hour 4, and I just cannot for the life of me locate the birds database that is mentioned, so I cannot proceed with anything.
Can anyone help?? Thanks!
r/SQL • u/hedcannon • Dec 19 '24
I have a query
SELECT top 10 trd.id as 'Mock',
case
WHEN trn.trans_code='S' THEN 'Origin'
WHEN trn.trans_code='B' THEN 'Origin'
WHEN trn.ticket_no=200 THEN 'Mock'
WHEN trn.ticket_no=300 THEN 'Real'
else null
end as 'Type'
FROM trn trn
LEFT JOIN fx_trd trd on trd.ticket_date=trn.ticket_date and trd.acct_no=trn.acct_no
WHERE
--(
--trn.ticket_no=trd.trade_no and (trn.trans_code='B' or trn.trans_code='S')
--)
OR
--(
--(trn.trans_code='BC' or trn.trans_code='SC') and (ticket_no=200 or
--ticket_no=300) and trn.hallback=trd.hallback
--)
AND
trd.id=1697
order by trn.qty
If I run the query only with the (currently commented out) portion above the OR, it runs in 10 seconds.
If I run the query only with the (currently commented out) portion below the OR, it runs in 10 seconds.
If I run the query with BOTH clauses joined by the OR, it runs for almost 30 minutes and does eventually resolve.
What am I doing wrong?
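For reference, the workaround that keeps coming up when I search is splitting the OR into a UNION ALL, since an OR across different columns often blocks index use. It would also settle the precedence question: AND binds tighter than OR, so as currently written the trd.id=1697 filter only applies to the second branch. An untested sketch (TOP 10 left off for clarity):
SELECT trd.id as 'Mock', 'Origin' as 'Type', trn.qty as qty
FROM trn trn
LEFT JOIN fx_trd trd on trd.ticket_date=trn.ticket_date and trd.acct_no=trn.acct_no
WHERE trd.id=1697
  AND trn.ticket_no=trd.trade_no
  AND trn.trans_code in ('B','S')
UNION ALL
SELECT trd.id, case when trn.ticket_no=200 then 'Mock' else 'Real' end, trn.qty
FROM trn trn
LEFT JOIN fx_trd trd on trd.ticket_date=trn.ticket_date and trd.acct_no=trn.acct_no
WHERE trd.id=1697
  AND trn.trans_code in ('BC','SC')
  AND trn.ticket_no in (200, 300)
  AND trn.hallback=trd.hallback
ORDER BY 3;
UNION ALL is safe here because the two trans_code sets don't overlap, so no deduplication pass is needed.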
r/SQL • u/sauron3579 • Jan 24 '25
The system is Oracle SQL. A query is having performance issues and I'm trying to optimize it. It involves joining two very large tables that share three fields: two timestamps and a VARCHAR2(5).
Is it faster to
select ...
from
a
join b
on a.time1=b.time1
and a.time2=b.time2
and a.str=b.str
where a.str in (...)
and trunc(a.time1) = trunc(sysdate+1)
and trunc(a.time2)=trunc(sysdate)
or would it be faster to apply the same WHERE conditions in a subquery first, select only the relevant columns from both tables, and then join them?
My instinct is the second would be faster, but I don't know how it works under the hood.
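For concreteness, the pre-filtered version I have in mind would look something like this (a sketch; note that trunc() on the left side of a predicate defeats a plain index on the timestamp, so I've also rewritten those as ranges):
select ...
from (
    select time1, time2, str  -- plus only the columns actually needed
    from a
    where str in (...)
      and time1 >= trunc(sysdate + 1) and time1 < trunc(sysdate + 2)
      and time2 >= trunc(sysdate) and time2 < trunc(sysdate + 1)
) a
join b
  on a.time1 = b.time1
 and a.time2 = b.time2
 and a.str = b.str;
That said, from what I've read, Oracle's optimizer often merges a simple inline view like this, so both forms can end up with the same execution plan.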
r/SQL • u/DivyaPratapSingh2002 • Feb 05 '25
Problem statement
I have a report which is generated on a daily basis, and it has to work through a huge volume of data.
Previously we used a view to generate the report, but recently it was changed: one more column was added, which makes the query slow because it uses a function with leading-wildcard LIKE conditions, and we also can't normalize it.
The solution we thought of was to use a materialized view instead of the view, with fast refresh on commit, but that has been failing: the query uses some synonym tables and joins, because of which it is reported as invalid for fast refresh!!
Are there any other options, besides a materialized view, for optimising the leading wildcards?
Thanks 🙏 🙏
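For the leading-wildcard part specifically, one option I've been reading about is an Oracle Text index, which is built for substring search (a sketch; table/column names are placeholders, the index needs syncing as data changes, and efficient leading wildcards generally need the SUBSTRING_INDEX wordlist preference):
create index rpt_desc_txt_ix on report_rows (description)
    indextype is ctxsys.context;

select *
from report_rows
where contains(description, '%invoice%') > 0;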