this post was submitted on 01 Jul 2023
Data Engineering
This was announced by Databricks at their Data & AI summit.
It's interesting to see these older data warehouses starting to implement features that newer projects have proven over the past few years.
This method of incremental materialisation for streaming ingestion is the default in ClickHouse, and there are all sorts of so-called "streaming databases" that have popped up to do exclusively this, e.g. Materialize, RisingWave, etc.
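For anyone unfamiliar, the ClickHouse pattern looks roughly like this: a materialized view acts as an insert trigger that incrementally maintains a pre-aggregated table. A minimal sketch (table and column names are made up for illustration):

```sql
-- Raw ingestion table: streaming inserts land here.
CREATE TABLE events
(
    ts      DateTime,
    user_id UInt64
)
ENGINE = MergeTree
ORDER BY ts;

-- Pre-aggregated target. SummingMergeTree sums `cnt` for rows
-- sharing the same ORDER BY key during background merges.
CREATE TABLE events_per_hour
(
    hour DateTime,
    cnt  UInt64
)
ENGINE = SummingMergeTree
ORDER BY hour;

-- The materialized view fires on every insert into `events`,
-- incrementally maintaining the aggregate -- no batch job or
-- scheduler involved.
CREATE MATERIALIZED VIEW events_per_hour_mv
TO events_per_hour AS
SELECT toStartOfHour(ts) AS hour, count() AS cnt
FROM events
GROUP BY hour;
```

One caveat: because merges are asynchronous, reads should still aggregate, e.g. `SELECT hour, sum(cnt) FROM events_per_hour GROUP BY hour`, to fold in any not-yet-merged parts.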
I don't know what Databricks' implementation is like since I'm not a customer of theirs, so I'm interested to see how successful it is. I've been using ClickHouse for some time now, and it's a super powerful feature for getting away from batch jobs and schedules.