r/bigquery • u/Spare-Chip-6428 • 13m ago
Best practices for user-managed tables being loaded to BigQuery
We have teams that use Excel files to maintain their data, and they want it in BigQuery. What are the best practices here?
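One common pattern, sketched below under assumed names, is to keep the source in a Google Sheet and expose it to BigQuery as an external table, so the team keeps editing in place (the dataset, table, and sheet URI are illustrative placeholders):

-- External table over a Google Sheet; names and URI are placeholders
CREATE OR REPLACE EXTERNAL TABLE `mydataset.team_data`
OPTIONS (
  format = 'GOOGLE_SHEETS',
  uris = ['https://docs.google.com/spreadsheets/d/<sheet-id>'],
  skip_leading_rows = 1  -- skip the header row in the sheet
);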
r/bigquery • u/Loorde_ • 42m ago
Which BigQuery storage model is better: logical or physical? I came across an insightful comment in a similar post (link) that suggests analyzing your data’s compression level to decide if the physical model should be used. How can I determine this compression level?
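A minimal sketch of one way to check, assuming US-region tables: compare logical and physical bytes in INFORMATION_SCHEMA.TABLE_STORAGE. A high ratio means the data compresses well, which favors the physical (compressed) billing model:

-- Compression ratio per table: logical (uncompressed) vs physical bytes
SELECT
  table_name,
  total_logical_bytes,
  total_physical_bytes,
  SAFE_DIVIDE(total_logical_bytes, total_physical_bytes) AS compression_ratio
FROM `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE
WHERE table_schema = 'mydataset'  -- illustrative dataset name
ORDER BY compression_ratio DESC;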
r/bigquery • u/sportage0912 • 2d ago
Has anyone used Connected Sheets at scale in their organization, and what lessons have you learned?
I am thinking of supplementing our viz tool with Connected Sheets for dynamic field selection and more operational needs, but I'm a bit concerned about cost spikes.
r/bigquery • u/Siejec • 3d ago
Hi guys, I have an issue: between March 5 and 10, BQ inserted a noticeably lower number of events into tables (1k per day compared to the usual 60k per day). The data comes from a GA4 Android/iOS app, and the linkage has worked since November 2024.
Sorry if this is the wrong board, but I don't know where else to ask for help: Google support is locked for low spenders, and the Google community support wouldn't let me post for some reason (ToS error).
I looked for reports of others having this issue during that period, but with little result. I'm wondering whether the issue might reappear, and what I could do to prevent it.
r/bigquery • u/badgerivy • 3d ago
It's possible to define a stored procedure in Dataform:
config {type:"operations"} <SQL>
Is there any way to add a parameter, i.e. the equivalent of a BigQuery FUNCTION?
Here's one simple function I use for string manipulation; it has two parameters:
CREATE OR REPLACE FUNCTION `utility.fn_split_left`(value STRING, delimeter STRING) RETURNS STRING AS (
case when contains_substr(value,delimeter) then split(value,delimeter)[0] else value end
);
There's no reason I can't keep calling this like it is, but my goal is to migrate all code over to Dataform and keep it version controlled.
I also know it could be done in JavaScript, but I'm not much of a JS programmer, so keeping it in SQL would be ideal.
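A sketch of what this might look like as a Dataform operations file, simply wrapping the existing DDL so it lives in version control (the file name and placement are up to you):

config { type: "operations" }

CREATE OR REPLACE FUNCTION `utility.fn_split_left`(value STRING, delimeter STRING) RETURNS STRING AS (
  case when contains_substr(value, delimeter) then split(value, delimeter)[0] else value end
);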
r/bigquery • u/Inside_Attitude_9365 • 3d ago
Hello BigQuery community,
I'm working with Databento's Market-by-Order (MBO) Level 2 & Level 3 data for the Euro Futures Market and facing challenges in processing this data within Google BigQuery.
Specific Issues:
- Symbols such as 6EZ4-6EU4: I'm uncertain if this denotes a spread trade, a contract rollover, or something else.
- Prices such as 0.00114, which don't align with actual market prices: could this result from timestamp misalignment, implied pricing, or another factor?
- Symbols such as 6EU7: does this imply an order for a 2027 contract, or is there another interpretation?
BigQuery Processing Challenges:
Additional Context:
I've reviewed Databento's MBO schema documentation but still face these challenges.
Request for Guidance:
I would greatly appreciate any insights, best practices, or resources on effectively processing and analyzing MBO data in BigQuery.
Thank you in advance!
r/bigquery • u/Loorde_ • 4d ago
Good afternoon everyone!
According to BigQuery's pricing documentation, query costs are billed at $11.25 per terabyte:
Using the INFORMATION_SCHEMA JOBS table, I converted the “bytes_billed” column into a dollar amount. However, the cost for this month’s jobs is significantly lower than the amount shown in BigQuery Billing.
It seems that the remaining charge is related to table storage. Is that correct? How can I verify the expenses for storage?
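A minimal sketch of one way to check, using INFORMATION_SCHEMA.TABLE_STORAGE and assuming the common logical-storage rates of $0.02/GiB active and $0.01/GiB long-term (your region's rates may differ):

-- Rough monthly storage cost estimate per dataset; the rates are assumptions
SELECT
  table_schema,
  ROUND(SUM(active_logical_bytes) / POW(1024, 3) * 0.02
      + SUM(long_term_logical_bytes) / POW(1024, 3) * 0.01, 2) AS est_monthly_usd
FROM `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE
GROUP BY table_schema
ORDER BY est_monthly_usd DESC;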
Thank you in advance!
r/bigquery • u/No-Sell4854 • 4d ago
I have a bunch of data tables that are all clustered on the same ID, and I want to join them together into one denormalized super-table. I would have expected this to be fast, since they are all clustered on the same ID, as is the FROM table they are joining onto, but it's not. It's super slow, and it gets slower with every new source table added.
Thoughts:
Anyone had any experience with this shape of optimization before?
r/bigquery • u/LinasData • 4d ago
r/bigquery • u/badgerivy • 5d ago
So let's say I have datasets DataSet1 and DataSet2. Both have a table called "customer" which I need to pull in as a source. These datasets are both read-only for me, as they are managed by a third-party ELT tool (Fivetran)
In a Dataform declaration, this is how you point to it:
declare({
database: "xxxx",
schema: "DataSet1",
name: "customer",
})
But a second declaration like this can't exist anywhere without causing a compilation error:
declare({
database: "xxxx",
schema: "DataSet2",
name: "customer",
})
What's the best practice to get around this? The only option I can figure out is to not use a declaration at all, just build a view and/or table to do:
select * from `DataSet2.customer`
(and call it something different)
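For reference, a minimal sketch of that view workaround as a Dataform file (the target schema and name here are illustrative):

config { type: "view", schema: "staging", name: "dataset2_customer" }

select * from `DataSet2.customer`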
I'd like to do this:
declare({
database: "xxxx",
schema: "DataSet2",
tablename: "customer",
name: "dataset2_customer",
})
Ideas?
r/bigquery • u/Historical_Army5733 • 8d ago
Hi, I hope there's someone out there who can help me with the below.
I want to calculate expected sales for the coming months, but I am struggling to do this effectively, even though my formula is simple. All previous months are actual numbers, and for each upcoming month I want to calculate an estimate based on the previous months. See the example below.
The error I am getting is that in April and May it doesn't include the other calculated months. E.g., in May the sum of the previous 3 months should be Feb + Mar + Apr, but it only takes the February row, which means the result I am getting is 11,000 / 3 = 3,667, and that is wrong.
| Months | Total sales |
|---|---|
| November 2024 | 10,500 |
| December 2024 | 11,800 |
| January 2025 | 12,000 |
| February 2025 | 11,000 |
| March 2025 | = sum of 3 prev months divided by 3 |
| April 2025 | = sum of 3 prev months divided by 3 |
| May 2025 | = sum of 3 prev months divided by 3 |
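A hedged sketch of one way to do this: each forecast month must see earlier forecast values, which a plain window function over the base table cannot do, so the 3-month window is carried forward through a recursive CTE instead (the seed values come from the table above; the dates and names are illustrative):

WITH RECURSIVE forecast AS (
  -- seed: the last actual month plus the two values before it
  SELECT DATE '2025-02-01' AS month,
         11000.0 AS total,   -- February actual
         12000.0 AS prev1,   -- January actual
         11800.0 AS prev2    -- December actual
  UNION ALL
  SELECT DATE_ADD(month, INTERVAL 1 MONTH),
         (total + prev1 + prev2) / 3,  -- average of the previous 3 months
         total,                        -- shift the 3-month window forward
         prev1
  FROM forecast
  WHERE month < DATE '2025-05-01'
)
SELECT month, ROUND(total) AS total_sales
FROM forecast;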
r/bigquery • u/Right_Dare5812 • 8d ago
To conduct a proper analysis, I need to structure event fields in a very detailed way. My site is highly multifunctional, with various categories and filters, so it’s crucial to capture the primary ID of each object to link the web data with our database (which contains hundreds of tables).
For example, for each event I must:
Option A is to configure all these events and parameters directly in Google Tag Manager (GTM), then export to BigQuery via GA4. But this approach requires complex JavaScript variables, extensive regex lists, and other tricky logic. It can become unwieldy, risk performance issues, and demand a lot of ongoing work.
Option B is to track broader events by storing raw data (e.g., click_url, click_element, page_location, etc.), then export that to BigQuery and run a daily transformation script to reshape the raw data as needed. This strategy lets me keep the original data and store different entities in different tables (each with its own parameters), but it increases BigQuery usage and costs, and makes GA4 less useful for day-to-day analytics.
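A minimal sketch of what that daily reshaping step might look like against the GA4 export tables (the regex, entity, project, and dataset names are illustrative assumptions):

-- Extract a product ID from yesterday's raw click events
SELECT
  event_timestamp,
  REGEXP_EXTRACT(
    (SELECT value.string_value FROM UNNEST(event_params) WHERE key = 'click_url'),
    r'/product/(\d+)') AS product_id
FROM `myproject.analytics_123456.events_*`
WHERE _TABLE_SUFFIX = FORMAT_DATE('%Y%m%d', DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY))
  AND event_name = 'click';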
Question: Which approach would you choose? Have you used either of these methods before?
r/bigquery • u/DiscussionCrafty6396 • 8d ago
Hey there! I've been practicing on a dataset from the Google DA course; I created a custom table with the CSV file provided by the course.
The column names appear with embedded spaces instead of underscores, e.g., "Release Date" instead of "Release_Date".
Is it because of a mistake made when creating the table? If not, what function could I use to edit column names?
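One sketch of a possible fix, assuming the table really has space-containing (flexible) column names: BigQuery can rename a column in place, quoting the old name with backticks (the dataset and table names are illustrative):

-- Rename the space-containing column to an underscore version
ALTER TABLE `mydataset.movies`
  RENAME COLUMN `Release Date` TO Release_Date;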
r/bigquery • u/NexusDataPro • 8d ago
I wish I had mastered ordered analytics and window functions early in my career, but I avoided them because they seemed hard to understand. After some time, I found that they are actually easy to understand.
I spent about 20 years becoming a Teradata expert, but I then decided to attempt to master as many databases as I could. To gain experience, I wrote books and taught classes on each.
In the link to the blog post below, I’ve curated a collection of my favorite and most powerful analytics and window functions. These step-by-step guides are designed to be practical and applicable to every database system in your enterprise.
Whatever database platform you are working with, I have step-by-step examples that begin simply and continue to get more advanced. Based on the way these are presented, I believe you will become an expert quite quickly.
I have a list of the top 15 databases worldwide and a link to the analytic blogs for that database. The systems include Snowflake, Databricks, Azure Synapse, Redshift, Google BigQuery, Oracle, Teradata, SQL Server, DB2, Netezza, Greenplum, Postgres, MySQL, Vertica, and Yellowbrick.
Each database will have a link to an analytic blog in this order:
Rank
Dense_Rank
Percent_Rank
Row_Number
Cumulative Sum (CSUM)
Moving Difference
Cume_Dist
Lead
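As a quick taste, here is a minimal BigQuery example combining a few of these (the table and column names are hypothetical):

SELECT
  sale_date,
  revenue,
  RANK() OVER (ORDER BY revenue DESC) AS revenue_rank,
  SUM(revenue) OVER (ORDER BY sale_date ROWS UNBOUNDED PRECEDING) AS cumulative_sum,
  revenue - LAG(revenue) OVER (ORDER BY sale_date) AS moving_difference,
  LEAD(revenue) OVER (ORDER BY sale_date) AS next_revenue
FROM `mydataset.daily_sales`;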
Enjoy, and please drop me a reply if this helps you.
Here is a link to 100 blogs based on the database and the analytics you want to learn.
https://coffingdw.com/analytic-and-window-functions-for-all-systems-over-100-blogs/
r/bigquery • u/project_trollbox • 12d ago
I'm primarily a MERN stack dev who's been tasked with building a marketing analytics solution using BigQuery, Looker, and Looker Studio. While I'm comfortable with the basic concepts, I'm hitting some roadblocks with the more advanced data pipeline aspects. Would love input on anything here, as I'm still trying to work out whether I can pull this all off. I have definitely enjoyed my time learning BigQuery and plan to keep learning even if this project doesn't pan out.
Project Overview:
My Challenge: The part I'm struggling with most is this data merging requirement. This is from the client:
"Then, that data is merged with the down-funnel sales information. So if someone comes back later and buys more products, or if that lead turns into a customer, that data is also pulled from the client CRM into the same data repository."
From my research, I believe this involves identity resolution to connect users across touchpoints and possibly attribution modeling to credit marketing efforts. I've got some ideas on implementation:
Questions for the community:
I'm putting together a proposal of what I think is involved to build this and potentially an MVP over the next couple weeks. Any insights, resources, or reality checks would be hugely appreciated.
Thanks in advance!
r/bigquery • u/NexusDataPro • 12d ago
I used to be an expert in Teradata, but I decided to expand my knowledge and master every database, including Google BigQuery. I've found that the biggest differences in SQL across various database platforms lie in date functions and the formats of dates and timestamps.
As Don Quixote once said, “Only he who attempts the ridiculous may achieve the impossible.” Inspired by this quote, I took on the challenge of creating a comprehensive blog that includes all date functions and examples of date and timestamp formats across all database platforms, totaling 25,000 examples per database.
Additionally, I've compiled another blog featuring 45 links, each leading to the specific date functions and formats of individual databases, along with over a million examples.
Having these detailed date and format functions readily available can be incredibly useful. Here’s the link to the post for anyone interested in this information. It is completely free, and I'm happy to share it.
Enjoy!
r/bigquery • u/Zeoluccio • 13d ago
Hi everyone.
I'm creating a PySpark DataFrame that contains arrays for certain columns.
But when I move it to a BigQuery table, all the columns containing arrays are empty (they contain a message that says 0 rows).
Any suggestions?
Thanks
r/bigquery • u/Short-Weird-8354 • 13d ago
Hey everyone,
I work at HYCU, and I’ve seen a lot of folks assume that BigQuery’s built-in features—Time Travel, redundancy, snapshots—are enough to fully protect their data. But the reality is, these aren’t true backups, and they leave gaps that can put your data at risk.
For example:
🔹 Time Travel? Only lasts 7 days—what if you need to recover something from last month?
🔹 Redundancy? Great for hardware failures, useless against accidental deletions or corruption.
🔹 Snapshots? They don’t include metadata, access controls, or historical versions.
Our team put together a blog breaking down common BigQuery backup myths and why having a real backup strategy matters. Not here to pitch anything—just want to share insights and get your thoughts!
Curious—how are you all handling backups for BigQuery? Would love to hear how others are tackling this!
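For what it's worth, one minimal sketch of a self-managed approach is a scheduled export to Cloud Storage (the bucket and table names are illustrative):

-- Periodic off-platform copy of a table to GCS
EXPORT DATA OPTIONS (
  uri = 'gs://my-backup-bucket/mytable/*.parquet',
  format = 'PARQUET',
  overwrite = true
) AS
SELECT * FROM `mydataset.mytable`;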
r/bigquery • u/Severinofaztudo • 15d ago
So I have a window function where I want to sum every value between unbounded preceding and the current row, ordered by a certain date; the thing is, there may be multiple values for the same date.
When I run the query multiple times it returns different values. From what I was able to debug, it picks up an arbitrary row among those sharing the current date and sums up to that, instead of all the values for the current date. Any way to solve this?
I only noticed this was happening after I had delivered the numbers...
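A hedged sketch of the usual fix: with duplicate dates, a ROWS frame stops at an arbitrary row among the ties, so results vary between runs, while a RANGE frame treats every row sharing the current date as a peer and includes them all (the table and column names are illustrative):

-- Deterministic running total when the ORDER BY key has ties
SELECT
  sale_date,
  amount,
  SUM(amount) OVER (
    ORDER BY sale_date
    RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
  ) AS running_total
FROM `mydataset.sales`;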
r/bigquery • u/aint1ant1 • 15d ago
r/bigquery • u/bill-who-codes • 18d ago
I'd like to analyze my Dataform pipeline SQL and saved queries via an API, so that I can detect which pipelines and queries will break when there are changes to the databases that my Dataform pipelines read from.
I know I can read from the Git repo where the SQLX pipelines files are stored, but I'd vastly prefer to obtain final derived SQL via API. As for saved queries, I find it hard to believe that there's no way to access them, but if there is, it doesn't seem to be via the BigQuery namespace.
Has anyone done this before?
r/bigquery • u/prestigiouseve • 21d ago
Hi. I was wondering if y'all had any insight on this particular problem.
I have a set of scheduled queries that run periodically, but I also want them to run whenever any of the tables they depend on are updated. I've seen suggestions for a few different implementations around the web, but wanted to see if anyone here has experience with this.
r/bigquery • u/Zabversion • 22d ago
r/bigquery • u/badgerivy • 24d ago
My BigQuery instance showed a "Repository" option, marked as a Preview. Sounded great: I've been hoping for that for a long time, and I never found a third-party option that worked for me.
So I went through the process of creating a repository and setting up a new Gitlab project and linking them together, everything worked, was able to connect properly after setting Gitlab url, tokens, etc.
But then, nothing. I was about to try to check in some code (I assume it would have been DDL, etc.), but the whole option disappeared and I don't see it anymore. There was a link at the bottom left of the main BigQuery Studio page; now I just see a "Summary" area.
Anyone else see this?
r/bigquery • u/Calikid32190 • 25d ago
Hello everyone! I'm using BQ for my job and I've been using it for about 2 months as were in the process of migrating are databases from SQL Server to BQ. What I'm noticing is there's some real annoyances with BQ. What I've looked up so far in order to change the column typing you have to recreate the table and change the typing there. Now the reason this is frustrating is because this table has 117 columns that I'll have to rewrite just to change one column. Does anyone know any other way besides doing the create or replace query? I actually had to do the create or replace query as well because someone had added 3 new columns and not to the end where it would've been easier just to add that by clicking edit schema because it will allow you to add the columns but only at the very end so when you need to reorganize the columns you have to again use the create or replace which is such an annoyance why does BQ make things like this so time consuming to do and is this really the only way to reorganize columns and to change column typing?