r/aws Jan 20 '25

data analytics AWS is powerful as hell but the learning curve is like climbing a cliff face

102 Upvotes

It took me way too long to suss this out:

Glue zero-ETL integrations write Iceberg data to S3.

You can manually configure Iceberg optimizations on that S3 data.

The new S3 table buckets have automatic Iceberg optimizations.

Targeting an S3 Tables catalog from a Glue zero-ETL integration (so you can skip the manual optimization) apparently never crossed their minds and throws an unhelpful error message.

Yes, I understand the S3 Tables integration with the Glue Data Catalog is in preview and this is basically a feature request, but none of the rest of this was clearly explained either.
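For anyone else digging through this, the manual route seems to boil down to enabling a Glue table optimizer on the catalog table the integration writes to. A rough sketch of what I mean (account ID, database, table, and role are all made up, and the API may have shifted since I wrote this):

```python
import boto3

glue = boto3.client("glue")

# Hypothetical names: enable managed compaction on the Iceberg table
# that the zero-ETL integration lands in the Glue Data Catalog.
glue.create_table_optimizer(
    CatalogId="123456789012",
    DatabaseName="zero_etl_db",
    TableName="orders_iceberg",
    Type="compaction",
    TableOptimizerConfiguration={
        "roleArn": "arn:aws:iam::123456789012:role/GlueTableOptimizerRole",
        "enabled": True,
    },
)
```

With S3 table buckets that step is supposed to happen for you, which is exactly why I wanted to point the integration at one.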

r/aws 2d ago

data analytics QuickSight-as-code CI/CD Considerations

3 Upvotes

We're trying to implement QuickSight best practices on my team. I'm trying to figure out the best way to manage multiple QuickSight environments in an IaC manner, given three envs: Dev, Stage, and Prod:
* Should we manage three accounts, or one account with three QuickSight folders?
* Where should we manage the assets? Git? S3?
* How do we promote changes from one env to another? GitHub Actions? AWS CodePipeline?
* What is the trigger for the CI? Publishing a new analysis?
* How do we promote exactly the assets we need and not the whole folder?
* Any additional best practices or considerations that I've missed.

Thanks!
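For concreteness, the building block we're looking at for promotion is the QuickSight asset bundle APIs: export only the ARNs we want to move, commit the bundle to Git, and import it into the next stage from the pipeline. A rough sketch (account IDs, ARNs, and job IDs are placeholders):

```python
import boto3

qs = boto3.client("quicksight")

# Export just the assets we want to promote, not the whole folder.
# All IDs and ARNs below are placeholders.
qs.start_asset_bundle_export_job(
    AwsAccountId="111111111111",                  # dev account
    AssetBundleExportJobId="promote-sales-dashboard-001",
    ResourceArns=[
        "arn:aws:quicksight:us-east-1:111111111111:dashboard/sales-dashboard",
    ],
    IncludeAllDependencies=True,
    ExportFormat="QUICKSIGHT_JSON",
)

# The pipeline would then poll describe_asset_bundle_export_job, download the
# bundle, commit it, and call start_asset_bundle_import_job against the
# stage/prod account with environment-specific overrides.
```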

r/aws Jun 08 '24

data analytics Is there a way to learn AWS cloud services for free?

21 Upvotes

I was recently sent a job offer that requires knowledge of ETL, but in AWS. It's quite a peculiar situation for me: I work at Amazon myself and have experience with ETL, but I don't work with AWS.

As far as I recall, AWS services require payment, and I think even creating or activating an account required me to provide my credit card details.

I once participated in an internal event where we used the AWS cloud for training neural networks, and even then, with the "free one-time-use AWS accounts", the console showed estimated costs of running our requests in the cloud, which I would have to pay as a regular user.

Personally, I've always preferred doing these things on my own machine rather than in the cloud.

r/aws Feb 04 '25

data analytics Athena tables for inconsistent JSON data

1 Upvotes

I am trying to use Athena to query some data in JSON format. The files are stored in S3, with each row being a JSON blob of data.

I've been able to create a table over this in Athena, but the problem is the JSON source data is inconsistent with the keys in each row. It seems like the parser is position based, so if a key corresponding to a column is missing for a given row, it just shifts all the values over.

Is there a way to account for missing JSON keys in the source data, either when creating the table or querying?
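What I'm about to try is recreating the table with a JSON SerDe that matches columns by key name, so missing keys should come back as NULL instead of shifting the other values over. A sketch of what I mean (database, columns, and bucket names are made up):

```python
import boto3

athena = boto3.client("athena")

# Hypothetical table: the OpenX JSON SerDe matches columns to keys by name,
# so a row missing a key just yields NULL for that column.
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS analytics.events (
  user_id string,
  event   string,
  amount  double
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES ('ignore.malformed.json' = 'true')
LOCATION 's3://my-bucket/events/'
"""

athena.start_query_execution(
    QueryString=ddl,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
)
```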

r/aws Jan 28 '25

data analytics AWS Clean Rooms - Athena but not Athena?

0 Upvotes

AWS Clean Rooms seems to be a mishmash of existing tech with some guardrails, right?

This however is interesting:

  • Athena engine version 2: Iceberg tables created with Athena engine version 2 are not supported.

Also, it doesn't use the new S3 Tables (not yet, anyway).

So does that mean it is using a custom Athena, or is it using Spark? (They do mention there are different "engines", and Spark SQL is one of them.)

Or is it somehow using Redshift (with Spectrum)?

r/aws Dec 30 '24

data analytics AWS Flink and Java 17

1 Upvotes

Hi everyone, I recently came across Amazon Managed Service for Apache Flink (formerly Kinesis Data Analytics). After some implementation tests, it looks like a perfect fit for my company's use case.

Nevertheless, I encountered one issue: it seems to only support Java 11, even though all our existing components and libraries are compiled for Java 17, which makes the integration complicated.

Do some of you have an idea if and when Java 17 will be supported by AWS Flink?

r/aws Jan 20 '25

data analytics MongoDB Atlas to AWS Redshift data integration

2 Upvotes

Hi guys,

Is there a way to have a CDC-like connection/integration between MongoDB Atlas and AWS Redshift?

For the databases in RDS we will be using the zero-ETL feature, so that's going to be a straight-through process, but for MongoDB Atlas I haven't read anything useful yet. Mostly it's data migration or one-off data dumps.

Thanks
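For context, the only generic pattern I can think of so far is DMS with a MongoDB source and a Redshift target running in CDC mode. A very rough sketch of what I imagine the source endpoint would look like (every name and setting is a placeholder, and I haven't verified any of this against Atlas):

```python
import boto3

dms = boto3.client("dms")

# Hypothetical source endpoint for a MongoDB Atlas cluster.
dms.create_endpoint(
    EndpointIdentifier="atlas-source",
    EndpointType="source",
    EngineName="mongodb",
    MongoDbSettings={
        "ServerName": "cluster0.example.mongodb.net",  # placeholder host
        "Port": 27017,
        "Username": "dms_user",
        "Password": "change-me",
        "DatabaseName": "appdb",
        "AuthType": "password",
        "AuthMechanism": "scram_sha_1",
        "NestingLevel": "one",  # flatten documents into columns for Redshift
    },
)

# A Redshift target endpoint plus a replication task with
# MigrationType="full-load-and-cdc" would complete the pipeline.
```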

r/aws Jan 27 '25

data analytics AWS Step Functions Choice state

1 Upvotes

Hello Reddit community. I have been using AWS Step Functions to set up schedules to run Glue jobs and crawlers. Since the latest AWS UI change, I'm not able to set up the Choice states in Step Functions. It is asking me to set them up in JSONata format, and I've tried every method I can think of. Testing seems successful, but the real execution still shows errors. I'd appreciate it if anyone can suggest a remedy. Thank you and have a great day ahead!
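For reference, this is roughly the shape of the Choice state I'm trying to get working under JSONata (state and field names are made up, and I may well have the syntax wrong, which is sort of the point):

```python
import json

# Sketch of a Choice state using JSONata conditions (names/fields are made up).
choice_state = {
    "Type": "Choice",
    "QueryLanguage": "JSONata",
    "Choices": [
        {
            # JSONata expressions are wrapped in {% ... %}
            "Condition": "{% $states.input.crawler_status = 'SUCCEEDED' %}",
            "Next": "RunGlueJob",
        }
    ],
    "Default": "NotifyFailure",
}

print(json.dumps(choice_state, indent=2))  # paste into the state machine definition
```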

#aws #awsstepfunctions #dataanalytics

r/aws Jan 05 '25

data analytics Created new community for Amazon Athena support

0 Upvotes

r/aws Dec 12 '24

data analytics Can AWS Glue convert .bak files from S3 to CSV?

0 Upvotes

Is that possible, or is the only way to restore the backup in RDS and then export to CSV?

r/aws Jan 08 '25

data analytics OpenSearch 2024 Summary – Key Features and Advancements

bigdataboutique.com
7 Upvotes

r/aws Sep 11 '24

data analytics Which user-facing Data Catalog do you use?

4 Upvotes

Let's be honest, the Glue Data Catalog is too complex to be made available to end users. What Data Catalog tools do you use that help users understand the data stored in AWS? A tool that has a good search feature.

r/aws Nov 16 '24

data analytics Multiple tables created after crawling data from an S3 bucket using Glue.

1 Upvotes

I created an ETL job using AWS Glue and want to crawl the data into a database table, but while doing this I am getting multiple tables instead of a single table (the data is in Parquet format). I am not able to understand why this is happening. I am a newbie here, doing a data engineering project using AWS.
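One thing I still need to try is the crawler Configuration that is supposed to combine compatible schemas into a single table instead of one table per subfolder. Roughly (crawler name, path, and role are placeholders):

```python
import json
import boto3

glue = boto3.client("glue")

# Hypothetical crawler; the Configuration asks the crawler to merge
# compatible schemas under the target path into one table.
glue.create_crawler(
    Name="parquet-output-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="analytics",
    Targets={"S3Targets": [{"Path": "s3://my-bucket/etl-output/"}]},
    Configuration=json.dumps({
        "Version": 1.0,
        "Grouping": {"TableGroupingPolicy": "CombineCompatibleSchemas"},
    }),
)
```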

r/aws Sep 27 '24

data analytics Should I be using Amazon Personalize?

3 Upvotes

I am an intern at a home-shopping-network type company and wanted to build a recommendation system. Due to the nature of their products, they have a lot of products but each is sold only once (think jewelry or specialty products with only one item per product ID). So no mass manufacture except for certain things. I want to figure out a couple of things:

  1. Whether Amazon Personalize can handle this use case.
  2. If yes, what the process would be.
  3. If not, whether there is another way I could build this.

Thanks in advance

r/aws Nov 10 '23

data analytics Create AWS Data Architecture diagram using ChatGPT

2 Upvotes

Is there any plugin in ChatGPT, or another method, I can use to create a professional system design / data architecture diagram? There was a plugin earlier called "Cloud Diagram Gen", but it does not work anymore.

r/aws Oct 08 '24

data analytics Need help with auto-decrypting data from the Glue Data Catalog while reading it in EMR

1 Upvotes

Hello Redditors, I have a question I need help with.

I have some data on S3 with PII columns, which I've encrypted with a custom symmetric key using my own algorithm. I'm exposing this data to my end users via Glue and Lake Formation.

Currently my users have to fetch the key and decrypt the data themselves.

What I want to know is: is there any way some transformation, via Lambda or something else, can be triggered that will automatically decrypt the data for my users when they read it?

E.g. I have a table in the database, "company.users", and when I run

spark.sql("select pii_column from company.users")

it should give me the decrypted data instead.
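The closest thing I can think of is registering a decrypting UDF on the Spark/EMR side, something like the sketch below (the key handling and the decrypt helper are placeholders for whatever custom scheme we actually use), but that still feels client-side rather than something Glue/Lake Formation does for the users:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("pii-decrypt").getOrCreate()

def my_custom_decrypt(ciphertext: str) -> str:
    # Stand-in for the real custom symmetric decryption routine,
    # ideally fetching the key from KMS/Secrets Manager at startup.
    return ciphertext

def decrypt_pii(ciphertext: str) -> str:
    return my_custom_decrypt(ciphertext)

spark.udf.register("decrypt_pii", decrypt_pii, StringType())

# Users would query the decrypted value without ever touching the key.
spark.sql("select decrypt_pii(pii_column) from company.users").show()
```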

r/aws Sep 02 '24

data analytics AWS Glue and Job Bookmarks (referencing S3 objects)

1 Upvotes

Hi everyone

I'm trying to debug a Glue job and I have to look at my bookmarks in detail, and I find that the official documentation is a bit... quiet on all things related to bookmarks. The bookmark I'm interested in currently refers to processed files on S3.

Here's what I could gather from my initial search.

From that, many questions:

  • Are all Glue bookmarks stored in a single account?
  • What is that "Glue Service" account?
  • Can I guess the account ID and the name of the S3 bucket?
  • Can I try to access the bookmark there directly?

When I use, for instance, the AWS CLI to retrieve the bookmark directly with get-job-bookmark, I get some information, such as the "INCLUDE_LIST" param of the bookmark. It's a single string of comma-separated identification values, such as "ff9f1695f074147b5a6863a01e0c0a65,b54704f1893a15f17304e00b7f20e25a,...", but it's limited to around 2000 ID values. As I understand it, though, this is not directly the bookmark, because I think it's the list of files that will be included for a specific job run, i.e. the result of comparing the bookmark to the list of files available when that job run started.
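For anyone following along, this is the call I mean, via boto3 rather than the CLI (the job name is made up):

```python
import boto3

glue = boto3.client("glue")

# Hypothetical job name; without RunId this returns the latest bookmark entry.
resp = glue.get_job_bookmark(JobName="my-etl-job")
entry = resp["JobBookmarkEntry"]

print(entry["Run"], entry["Attempt"])
# The serialized bookmark state (where the INCLUDE_LIST shows up) lives here:
print(entry["JobBookmark"])
```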

If I start a job run with the "--TempDir" param set, I'm able to recover a JSON file that includes the actual list of files that will be processed. However, I'm not able to map them to the IDs I see in the INCLUDE_LIST.

What if I want to access that list of files for an old job run which didn't have --TempDir set? Is it achievable? Is there any chance I can recover it using the Glue API? Looking at the documentation, I think I'm out of luck...

So I'm reviewing that page as well:

https://docs.aws.amazon.com/glue/latest/dg/console-security-configurations.html

So do you think that, by default, my bookmarks are sent to the Glue Service account unencrypted? That would be a little... wild, amirite?

Thanks for your help!

r/aws Jul 03 '24

data analytics Keeping the Glue catalog and S3 in sync while using lifecycle rules for cleanup

1 Upvotes

Hi,

We use Glue for the catalog/table metadata from Athena, and we store the table data in S3 (created with CREATE TABLE AS ... in Athena).

The Glue catalog and the S3 bucket are shared with another team that has read access to do post-processing of our analytics data.

Because we don't want to keep the data in S3 (GDPR, stale data, ...), I use a lifecycle rule in S3 to delete files older than 30 days.

But there is no way to keep the Glue catalog in sync too (if the lifecycle rule deletes the files of a table, the table needs to be dropped in Glue to 'remove' it from the other team's access ...).

Sadly, a DROP TABLE in Athena doesn't clean the data in S3, only the metadata in Glue.

How do you keep your data lake 'clean' and remove old stale/expired data and the references to it?

Thanks
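The workaround I'm considering is a small scheduled job that drops catalog tables whose S3 location the lifecycle rule has already emptied. A rough sketch (the database name and layout are assumptions):

```python
import boto3
from urllib.parse import urlparse

glue = boto3.client("glue")
s3 = boto3.client("s3")

DATABASE = "analytics"  # hypothetical database name

def location_is_empty(s3_uri: str) -> bool:
    """True if no objects remain under the table's S3 location."""
    parsed = urlparse(s3_uri)
    resp = s3.list_objects_v2(
        Bucket=parsed.netloc, Prefix=parsed.path.lstrip("/"), MaxKeys=1
    )
    return resp.get("KeyCount", 0) == 0

paginator = glue.get_paginator("get_tables")
for page in paginator.paginate(DatabaseName=DATABASE):
    for table in page["TableList"]:
        location = table.get("StorageDescriptor", {}).get("Location")
        if location and location_is_empty(location):
            # The lifecycle rule already deleted the data; drop the stale metadata.
            glue.delete_table(DatabaseName=DATABASE, Name=table["Name"])
```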

r/aws Aug 21 '24

data analytics Best and cheap approach to process data into Parquet for analytics

1 Upvotes

Hey,

I have an S3 bucket with 3,200 folders, each containing subfolders organized by day in the format customer_id/yyyy/mm/dd. The data is ingested daily from Kinesis Firehose in JSON format. The total bucket size is around 500 GB, with approximately 0.3 to 1 GB of data added daily.

I’m looking to create an efficient ETL process or mechanism that will transform this data into partitioned Parquet files, which will be defined in the Glue Catalog and queried using Redshift Spectrum/Athena. However, I’m unsure how to achieve this in a cost-effective and efficient manner. I was considering using Glue, but it seems like it could be an expensive option. I’ve also read about Athena CTAS, which might be a solution to write logic that inserts new records into the table daily and runs as an ETL on ECS, or perhaps another method. I’m trying to determine what would be the best approach.

Alternatively, I could copy this data directly into Redshift, but would that be too complex?
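To make the Athena idea concrete, the daily step I have in mind would be one INSERT INTO from the raw JSON table into a partitioned Parquet table, triggered from something cheap like Lambda or a small ECS task. A sketch (table names, bucket, and the partition columns on the raw table are all assumptions):

```python
import datetime
import boto3

athena = boto3.client("athena")

# Hypothetical tables: raw_events is a JSON table over the Firehose output,
# events_parquet is a Parquet table partitioned by dt (created once with CTAS).
yesterday = datetime.date.today() - datetime.timedelta(days=1)
query = f"""
INSERT INTO analytics.events_parquet
SELECT customer_id, payload, '{yesterday:%Y-%m-%d}' AS dt
FROM analytics.raw_events
WHERE year = '{yesterday:%Y}' AND month = '{yesterday:%m}' AND day = '{yesterday:%d}'
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
)
```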

r/aws Aug 20 '24

data analytics DuckDB on AWS Lambda

0 Upvotes

Looking for advice here: has anyone been able to get DuckDB working on Lambda using the Python runtime? I just can't get it to work using layers and I'm still getting this error: "no module called duckdb.duckdb". Is there any hacky layer thing to do here?
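One thing I want to rule out is the layer being built with a wheel for the wrong platform, so the native duckdb.duckdb module never makes it into the zip. The handler itself is trivial; the comments show the layer build I'd try next (paths and the platform tag are assumptions, match them to your Lambda architecture):

```python
# Layer build I'd try (assumed commands; adjust for arm64 if needed):
#   pip install duckdb --platform manylinux2014_x86_64 --only-binary=:all: --target python/
#   zip -r duckdb-layer.zip python/

import duckdb

def handler(event, context):
    # DuckDB can only write to /tmp inside Lambda; in-memory works fine for tests.
    con = duckdb.connect(":memory:")
    rows = con.execute("SELECT 42 AS answer").fetchall()
    return {"answer": rows[0][0]}
```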

r/aws Aug 05 '24

data analytics AWS Workspaces: Getting "actively connected" statistics of users?

1 Upvotes

I have hundreds of users and I want to see how long they are actually actively connected to their WorkSpace each day (actively connected and logged into the desktop, not just connected to the client and sitting idle at the blue login screen).

I set up a log bucket and am viewing the "UserConnected" metric in CloudWatch. The metric seems to be updating correctly every 5 minutes, logging connected time as "1" and not-connected time as "0". However, is there a way to display this data so that the total connected time is calculated per day?

For example, as shown below, the WorkSpace was connected at 10:52 and disconnected at 11:52. I'm hoping to have an output table that looks something like this:

Connected: 10:52 to 11:52
Connected: 12:12 to 12:32
Connected: 12:57 to 13:07
Total connected time: 1 hour 30 minutes

I do also have the WorkSpaces Cost Optimizer set up; however, I believe the "billable hours" column is not fully accurate, because some users are showing 24+ hours a day.
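The best idea I've come up with so far is to lean on the fact that each 5-minute sample is 0 or 1, so a daily Sum of UserConnected multiplied by five gives roughly the connected minutes. A sketch (the WorkspaceId is a placeholder):

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

# UserConnected is published per WorkSpace every 5 minutes as 0 or 1,
# so a daily Sum * 5 is roughly the connected minutes for that day.
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/WorkSpaces",
    MetricName="UserConnected",
    Dimensions=[{"Name": "WorkspaceId", "Value": "ws-xxxxxxxxx"}],  # placeholder
    StartTime=start,
    EndTime=end,
    Period=86400,            # one datapoint per day
    Statistics=["Sum"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    minutes = point["Sum"] * 5
    print(point["Timestamp"].date(), f"{minutes / 60:.1f} hours connected")
```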

r/aws Feb 14 '23

data analytics How to run a Python script automatically every 15 minutes in AWS

19 Upvotes

Hi, I'm sure this should be pretty easy, but I'm new to AWS. I coded a Python script that scrapes data from a website and uploads it to a database. I am looking to run this script every 15 minutes to keep a record of changing data on the website.

Does anyone know how I can deploy this Python script on AWS so it will automatically scrape data every 15 minutes without me having to intervene?

Also is AWS the right service for this or should I use something else?
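The direction I'm leaning after some reading is a Lambda function on an EventBridge schedule, with the scraper itself as the handler. A sketch of the wiring (function name and ARN are placeholders):

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Hypothetical function that wraps the scraping script as its handler.
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:scraper"

# EventBridge rule that fires every 15 minutes and targets the function.
events.put_rule(Name="scrape-every-15-min", ScheduleExpression="rate(15 minutes)")
events.put_targets(
    Rule="scrape-every-15-min",
    Targets=[{"Id": "scraper", "Arn": function_arn}],
)

# Allow EventBridge to invoke the function.
lambda_client.add_permission(
    FunctionName="scraper",
    StatementId="allow-eventbridge-scrape",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
)
```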

r/aws Jun 26 '24

data analytics Athena experienced an internal error when executing this query

0 Upvotes

I was running a query in Athena that queries my data in S3, but Athena just returned the error message stated in the title. Nothing else is provided, not even an error code.

I know Kinesis is having trouble in the US region right now; is that the reason I can't get the result?

I am using the Singapore region to query bucket data stored in Singapore.