Apache DataFusion

154 points by thebuilderjr 6 days ago | 47 comments

kristjansson 3 days ago

Of interest and relevance: This past semester, Andy Pavlo's DB seminar at CMU explored a number of projects under the heading 'Database Building Blocks', starting with DataFusion and several of its applications. Take a listen!

https://www.youtube.com/playlist?list=PLSE8ODhjZXjZc2AdXq_Lc...

GardenLetter27 3 days ago

There's also Andrew Lamb's series - https://www.youtube.com/watch?v=NVKujPxwSBA

PartiallyTyped 3 days ago

The lectures / seminars that CMU's DB lab produces are always a treat!

jamesblonde 3 days ago

There is a Cambrian explosion in data processing engines: DataFusion, Polars, DuckDB, Feldera, Pathway, and more than I can remember.

It reminds me of 15 years ago, when there was JDBC/ODBC for data. Then, when data volumes increased, specialized databases became viable - graph, document, JSON, key-value, etc.

I don't see SQL and Spark hammers keeping their ETL monopolies for much longer.

jitl 3 days ago

Spark for sure I view with suspicion and avoid as much as possible at work.

SQL, though, is going the distance. Like, Feldera is SQL-based stream processing and uses DataFusion under the hood for some data wrangling. DuckDB is also very SQL.

I have my quibbles with SQL as a language but I would prefer SQL embedded in $myLanguage to needing to use Python or (shudder) Scala to screw around with data.

hipadev23 3 days ago

Absolutely agree. Spark is the same garbage as Hadoop but in-memory.

ignoreusernames 3 days ago

Just out of curiosity, why do you say that Spark is "in-memory"? I see a lot of people claiming that, including several that I've interviewed in the past few years, but it's not very accurate (at least in the default case). Spark SQL execution uses a bog-standard volcano-ish iterator model (with a pretty shitty codegen operator-merging part) built on top of their RDD engine. The exchange (shuffle) is disk-based by default (both for SQL queries and lower-level RDD code); unless you mount the shuffle directory on a ramdisk, I would say that Spark is disk-based. You can try it out in the spark shell:

  // write a small Parquet dataset, then force a shuffle (exchange) with a groupBy
  spark.sql("SELECT explode(sequence(0, 10000))").write.parquet("sample_data")
  spark.read.parquet("sample_data").groupBy($"col").count().count()
After running the code, you should see a /tmp/blockmgr-{uuid} directory that holds the exchange data.

bdndndndbve 3 days ago

Spark is "in-memory" in the sense that it isn't forced to spill results to disk between operations, which used to be a point of comparison to MapReduce specifically. Not ground-breaking nowadays but when I was doing this stuff 10+ years ago we didn't have all the open-source horizontally scalable SQL databases you get now - Oracle could do it and Redshift was the new hotness.

ignoreusernames 3 days ago

> Spark is "in-memory" in the sense that it isn't forced to spill results to disk between operations

I see your point, but that's only true within a single stage. Any operator that requires partitioning (groupBys and joins, for example) requires writing to disk.

> [...] which used to be a point of comparison to MapReduce specifically.

So each mapper in Hadoop wrote partial results to disk? LOL, this was way worse than I remembered, then. It's been a long time since I've dealt with Hadoop.

> Not ground-breaking nowadays but when I was doing this stuff 10+ years

I would say that it wouldn't have been ground-breaking 20 years ago either. I feel like Hadoop's influence held up our entire field for years. Most of the stuff that Arrow made mainstream, and that is being used by a bunch of the engines mentioned in this thread, has been known for a long time. It's like, as a community, we had blindfolds on. Sorry about the rant, but I'm glad the Hadoop fog is finally dissipating.

hipadev23 2 days ago

Because that was the central point in the original whitepaper [1]: Hadoop is slow because it's disk-only, whereas Spark uses memory and caching to speed things up. I understand Spark isn't 100% in-memory the way, say, Redis is, but it was still the major selling point vs. Hadoop.

https://people.csail.mit.edu/matei/papers/2010/hotcloud_spar...

62951413 2 days ago

Apache DataFusion Comet is a high-performance accelerator for Apache Spark, built on top of the powerful Apache DataFusion query engine: https://datafusion.apache.org/comet/user-guide/overview.html

pjmlp 3 days ago

As someone who comfortably ignored the NoSQL hype, I am not worried.

francocalvo 3 days ago

I don't think SQL is going anywhere. There might be abstractions that use these engines but where you still write SQL (à la dbt), before people get used to 10 APIs for the same thing.

What Spark has going for it is its ecosystem. Things like Delta and Iceberg are being written for Spark first. Look at PyIceberg, for example.

krapht 6 days ago

I feel like I'm not the target audience for this. When I have large data, I directly write SQL queries and run them against the database. It's impossible to improve performance when you have to go out to the DB anyway; might as well have it run the query too. Certainly the server ops and DB admins have loads more money to spend on making the DB fast compared with my anti-virus-laden corporate laptop.

When I have small data that fits on my laptop, Pandas is good enough.

Maybe 10% of the time I have stuff that's annoyingly slow to run with Pandas; then I might choose a different library, but needing this is rare. Even then, of that 10%, you can solve 9% by dropping down to numpy and picking a better algorithm...

jitl 3 days ago

Your large db doesn’t sound very large. If I want to run a query that requires visiting every row of my biggest table, I will need to run the query a total of 480 times across 96 different Postgres databases. Just `select id from block` will take days to weeks.

But, I can visit most rows in that dataset in about 4 hours if I use an OLAP data warehouse thing, the kind of thing you build on top of DataFusion.

threeseed 3 days ago

You’re right it isn’t for you.

It’s largely for companies who can’t put everything in a single database because (a) they don’t control the source schema, e.g. it’s a daily export from a SaaS app; (b) the ROI is not high enough to do so; and (c) it’s not in a relational format, e.g. JSON, logs, telemetry, etc.

And with the trend toward SaaS apps it’s a situation that is becoming more common.

GardenLetter27 3 days ago

Or when the data is massive - so even BigQuery would be crazy expensive.

thebuilderjr 6 days ago

I agree. The main reason I shared it is that I find it interesting as a library. I actually use it behind the scenes to build https://telemetry.sh. Essentially, I ingest JSON, infer a Parquet schema, store the data in S3 with a lookaside cache on disk, and then use DataFusion for querying.
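
For anyone curious, the querying side of a setup like that is only a few lines of DataFusion. A rough sketch (using the Rust SessionContext API; the table name, column name, and cached Parquet path are made up for illustration, not telemetry.sh's actual code):

  use datafusion::error::Result;
  use datafusion::prelude::*;

  #[tokio::main]
  async fn main() -> Result<()> {
      let ctx = SessionContext::new();
      // register a Parquet file (e.g. one pulled from S3 into the disk cache) as a table
      ctx.register_parquet("events", "./cache/events.parquet", ParquetReadOptions::default())
          .await?;
      // run plain SQL against it and print the result
      let df = ctx
          .sql("SELECT service, count(*) AS n FROM events GROUP BY service")
          .await?;
      df.show().await?;
      Ok(())
  }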

Hugsun 3 days ago

How do you infer your Parquet schemas?

threeseed 3 days ago

You infer the types of the source data.

For example, you can go through, say, 1% of your data and, for each column, see if you can coerce all of the values to a float, int, date, string, etc. From there you can set the Parquet schema with proper types.
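
A toy sketch of that coercion idea (plain Rust, hypothetical type labels, ignoring dates, nulls, and nested values for brevity):

  /// Pick the narrowest type that every sampled value parses as,
  /// falling back to string.
  fn infer_column_type(samples: &[&str]) -> &'static str {
      if samples.iter().all(|v| v.parse::<i64>().is_ok()) {
          "INT64"
      } else if samples.iter().all(|v| v.parse::<f64>().is_ok()) {
          "DOUBLE"
      } else {
          "UTF8"
      }
  }

  fn main() {
      assert_eq!(infer_column_type(&["1", "2", "3"]), "INT64");
      assert_eq!(infer_column_type(&["1", "2.5"]), "DOUBLE");
      assert_eq!(infer_column_type(&["1", "abc"]), "UTF8");
  }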

RobinL 3 days ago

> It's impossible to improve performance when you have to go out to the DB anyway;

That's not right. There are many queries that run far faster in DuckDB/DataFusion than (say) Postgres, even with the overhead of pulling whole large tables prior to running the query. (Or use something like pg_duckdb.)

For certain types of queries these engines can be 100x faster.

More here: https://postgres.fm/episodes/pg_duckdb

netcraft 3 days ago

Why would this be useful over DuckDB? (earnest question)

chatmasta 3 days ago

They’re similar, but DuckDb is more of a batteries-included database whereas DataFusion is an embeddable query engine. You can use DuckDb in embedded-ish scenarios, but it’s not primarily targeting that use case. To put it another way, DataFusion is sometimes described as “the LLVM of databases.”

Another difference is that DuckDb is written in C++ whereas DataFusion is in Rust, so all the usual memory-safety and performance arguments apply. In fact, DataFusion has recently overtaken DuckDb in ClickBench results after a community push last year to optimize its performance.

jitl 3 days ago

We tried both about 8 months ago; at the time, DuckDB's Node driver leaked memory and segfaulted, and DataFusion was missing some features we wanted. But they are both improving rapidly.

geysersam 3 days ago

> DataFusion has recently overtaken DuckDb in Clickbench results after a community push last year

Really? I don't see it near the top.

[CH benchmarks](https://benchmark.clickhouse.com/#eyJzeXN0ZW0iOnsiQWxsb3lEQi...)

alamb 2 days ago

Specifically, DataFusion is faster when querying parquet directly.

Most of the leaderboard of ClickBench is for database-specific file formats (that you first have to load the data into).

kalendos 3 days ago

You might need to adjust the filters to do an apples-to-apples comparison.

https://benchmark.clickhouse.com/#eyJzeXN0ZW0iOnsiQWxsb3lEQi...

riku_iki 2 days ago

Not clear why someone would need to give up on the native DuckDB format if it is much faster.

chatmasta 2 days ago

Because it means you need to keep another copy of your data in a special format just for DuckDb. The point of Parquet is that it’s an open format queryable by multiple tools. You don’t need to wait to load every table into a new format, you don’t need to retain multiple copies, and you don’t need to keep them in sync.

If DuckDb is the only query engine in your analytics stack, then it makes sense to use its specialized format. But that’s not the typical Lakehouse use case.

riku_iki 2 days ago

> But that’s not the typical Lakehouse use case.

That benchmark is also not a typical lakehouse use case, since all the data is hosted locally, so it doesn't test a significant component of the stack.

chatmasta 2 days ago

Yeah, that's one of many issues with ClickBench. It's also one table, so it can't test joins.

TPC-H is okay but not Lakehouse-specific. I'm not aware of any benchmarks that specifically test the performance of engines under common setups like external storage or scalable compute. It would be hard to design one that's easily reproducible. (And in fairness to ClickBench, it's intentionally simple for that exact reason - to generate a baseline score for any query engine that can query tabular data.)

alamb 2 days ago

I think you would pick DataFusion over DuckDB if you want to customize it substantially. Not just with user-defined functions (which are quite easy to write in DataFusion and are very fast), but things like:
* custom file formats (e.g. Spiral or Lance)
* custom query languages / SQL dialects
* custom catalogs (e.g. other than a local file or prebuilt DuckDB connectors)
* custom indexes (read only parts of parquet files based on extra information you store)
* etc.

If you are looking for the nicest "run SQL on local files" experience, DuckDB is pretty hard to beat

Disclaimer: I am the PMC chair of DataFusion

There are some other interesting FAQs here too: https://datafusion.apache.org/user-guide/faq.html