Thank you, Adam. An amazing, short but very valuable video. Please continue enlightening us. Also, I feel that this way of presenting is much better than only seeing the whiteboard and not the tutor. It makes me feel engaged.
I agree. I like seeing someone talking to me and drawing as they explain; it's much more engaging.
Hello, Adam!
I absolutely love and admire the effort you (and other Confluent speakers) put into making these very complex topics so easy to understand and grasp.
Absolute best out there. Laconic and informative. Big thanks!
Adam here. Thanks! I appreciate the kind words.
Kapil here, with mostly .NET development experience and zero Kafka use in the past. I learned 10% from the video and another 90% from the comments; wow, thanks Adam. I now have to check out what else you and your team have created for me to learn. I hope to learn and use it in the near future.
I consider myself lucky to have seen both setups in action. One thing I have noticed is that event-driven architecture is flexible when it comes to scaling.
Very well explained, and the diagrams helped a lot. Great pacing; I didn't get lost in words and didn't feel like I needed to play it at 1.5x speed like a lot of videos. I liked the lecture style of this vs. many 'content creators' whose visually beautiful videos with animations and graphics end up distracting from the topic. Great job!
Request/response can be built with full consistency. I suppose that craft, mainly transaction management, has either been forgotten or is a limitation of service-based architectures.
*Thank you, sir. A veteran IT person learned a lot. ❤*
I was honestly wondering how you learned how to write backwards so effectively until I realized you just flipped the video...
It also helps that everyone is left handed. 😉
Actually, I never noticed until this video that when you walk behind the text, you block the black background, which makes it impossible to read the text. Well, I guess that’s only a problem if you’re white. 🤔
I guess there are some advantages to being black after all…
Does this mean that light boards are racist?
Well, in any case, great video. 👍
Adam! Great and clear explanation of trade-offs! Keep it up! - Adam
Hi, Adam. Thank you for the great explanation, but there's another important part missing: the cost. Could you please go over that in a following video?
Do you keep the entire history of events (1 create, N updates, maybe 1 delete event) for each and every document/object/… in those topics? How does that affect storage/performance over time? Or is there some way to compress/discard past events, say, for example, by regularly creating snapshots of the state?
Also, keep in mind that even in a traditional system where you only store the current state of each entity in a database, you’re almost always going to want to store a historical log of changes to those entities somewhere. So the way that I look at it, you’re probably going to end up storing both the current state and the historical changes either way. Event driven design just stores the changes first.
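To make that "store the changes first, derive the current state" idea concrete, here is a minimal sketch in plain Python; the event shapes and field names are invented for illustration, not taken from the video:

```python
# Hypothetical sketch: derive an entity's current state by replaying its change log.
# Event types ("created", "updated", "deleted") and fields are illustrative only.

from typing import Optional

events = [
    {"type": "created", "order_id": "o-1", "data": {"item": "book", "qty": 1}},
    {"type": "updated", "order_id": "o-1", "data": {"qty": 2}},
    {"type": "updated", "order_id": "o-1", "data": {"status": "shipped"}},
]

def current_state(order_id: str, log: list[dict]) -> Optional[dict]:
    state: Optional[dict] = None
    for event in log:
        if event["order_id"] != order_id:
            continue
        if event["type"] == "created":
            state = dict(event["data"])   # initial snapshot
        elif event["type"] == "updated" and state is not None:
            state.update(event["data"])   # apply the delta
        elif event["type"] == "deleted":
            state = None                  # tombstone
    return state

print(current_state("o-1", events))
# {'item': 'book', 'qty': 2, 'status': 'shipped'}
```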
Well done. The only thing I can think of, which I can't recall hearing in the video, is that event-driven architectures are not completely decoupled, just more decoupled than request/response architectures. They are still coupled on the data exchange contract, for example.
Adam drops another knowledge bomb! Respect
Nice video. I never thought of general REST API request/response systems as being different from EDA microservices.
How would you pick one over the other? What are the use cases?
@@ConfluentDeveloperRelations thanks for the explanation!
Great video Adam, thanks!
Amazing explanation! Thanks for this 👍😃
Thanks, it's really helpful.
Great video! Quick question: what would happen if the Kafka service is down in the EDA model? Or, how robust is the Kafka service?
What is the device that lets you write like that? Is it just a cam on, or…
In the EDA:
1) How do we verify the transaction actually got processed? (In a case where subscribers lose an event.)
2) In a microservices scenario where multiple nodes are running, how do we prevent duplicates? (How do we stop processing the same order twice?)
Thank you for the clear explanation
What if the delivery semantics are "at most once"? How can consistency be reached?
Well, cool. Everybody says that with APIs, if either service fails then the whole system fails, but no one says what will happen if the queue service fails, which will lead to exactly the same issues, plus synchronization afterwards.
Adam here - If the event broker fails, then the producers are responsible for buffering their output events until the broker comes back up. Similarly, you could buffer and backlog the API requests, but typically those tend to time out after a certain period (e.g., 5s, 10s) and return an error. Keep in mind that event brokers are typically designed to run in a cluster, where you would need to have multiple outages to bring it all down.
If you need to build for very high fault tolerance, then you can of course cache the events to be written to the event broker on disk - this adds complexity, but then your application could crash too, which calls for even more tolerance.
In the end it all becomes a matter of system design, requirements vs. nice-to-haves, and how much money you're willing to spend. But FWIW, it's a lot easier to make one component highly available and fault tolerant (eg: the broker) and allow other services to choose how much tolerance they need to have for the events they're writing. Some, like real-time metrics, may not care if the broker goes down and they'll just throw the data away. Others, like financial transactions, will need contingencies to ensure that they can resume when the broker comes back up.
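As a rough illustration of the "producers buffer until the broker comes back" idea above, here is a hedged sketch using the confluent-kafka Python client; the topic name and the spill-to-disk file are hypothetical, and a real implementation would need more care around ordering and replay:

```python
# Sketch only: spill events to local disk when delivery to the broker fails.
# Assumes the confluent-kafka Python client; topic/file names are made up.

import json
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",
    "enable.idempotence": True,       # safe retries without duplicates
    "message.timeout.ms": 30000,      # give up (and spill to disk) after 30s
})

def on_delivery(err, msg):
    if err is not None:
        # Broker unreachable or delivery timed out: keep the event locally
        # so it can be replayed once the broker is healthy again.
        with open("undelivered_events.jsonl", "a") as backlog:
            backlog.write(msg.value().decode("utf-8") + "\n")

def publish(order_event: dict) -> None:
    producer.produce(
        "orders",                                   # hypothetical topic
        value=json.dumps(order_event).encode(),
        callback=on_delivery,
    )
    producer.poll(0)  # serve delivery callbacks without blocking

publish({"order_id": "o-1", "status": "created"})
producer.flush()  # wait for outstanding deliveries (or their failure callbacks)
```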
@@adambellemare3265 True, but regarding the last point - you can also build a monolithic main API that scales horizontally and acts as the "one highly available component" for other, smaller systems.
I'd say the difference in usage is not fault tolerance but the requirements of your system and whether you can implement them one way or the other.
@@a_ryota Yep. Having worked at Shopify, they chose to make a very large distributed monolith with an API that scales horizontally for a large portion of their operations. However, they still ended up going an event-driven route with Kafka for many other use cases that weren't well supported - powering the analytics stack, ML, AI, reporting, and dashboard provisioning - as well as tasks like computing sales, receivables, taxation, invoices, inventory, shipping, etc., that didn't need the transactional capabilities of the MySQL database and could be accomplished outside of the monolith.
Event-driven decoupling absolves the main monolith API from having to do the work on behalf of the calling clients - instead, it provides just the data, and lets them choose the tech stack that works best for their own storage, computation, management, etc. But at the end of the day, if you can do everything you need inside a monolith and don't need the data anywhere else, then don't bother adding complexity where none is merited.
Amazing video! Question about your setup, have you been teaching yourself to write backwards? My mind doesn’t quite wrap around how this video is filmed, it looks like the transparent “whiteboard” is in front, with you behind it writing.
I'd call the structure of the talk more of a promo than an attempt to identify the difference.
If you start thinking at some point, "can I do the same things in req/resp?", the answer is often yes, with little or no effort.
I've heard somewhere that the answer of what to use depends on whether an answer from the service is needed or not. If yes -> req/resp; otherwise -> EDA can be used. But sadly this definition is quite vague and, as always, depends on the case.
Anyway, a good talk, thanks.
I prefer an orchestrating ordering process that triggers events for underlying services to act on. These services obtain the necessary data by making API calls to other services. Highly flexible through the use of a process-driven approach. Decoupled through event-driven services. Consistency through well-defined APIs.
@@ConfluentDeveloperRelations My orchestrator can replay itself based on history, and each event-driven service can scale horizontally to consume more work.
Why would you orchestrate some business process based on events (I assume) if the services are still making sync calls to other APIs?
That feels like orchestrated choreography. Data and temporal coupling are still there.
Could you please explain underlying reason to do so?
There are sometimes cases where the full data a service needs to process an event would be too large for, e.g., a Kafka message. In that special case the service could obtain the additional data via a synchronous call to another API. If the data provided by the API is immutable, then replayability won't be lost.
@@kohlimhg Yep, we also call it "claim cheque/check" pattern. The complexity with this pattern is stitching together the permissions and access controls between the Kafka record and the system that you present the claim check to. One good trick is to put all the work in the serializer and deserializer, such that it's transparent to the consumer.
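A rough sketch of that serializer/deserializer trick, with a hypothetical in-memory object store standing in for S3 or similar; the size threshold and key scheme are made up for illustration:

```python
# Claim-check sketch: large payloads go to object storage, the Kafka record
# carries only a pointer. ObjectStore, bucket layout, and threshold are hypothetical.

import json
import uuid

MAX_INLINE_BYTES = 512 * 1024  # illustrative "too big for a record" cutoff

class ObjectStore:
    """Stand-in for S3/GCS/etc.; a real client would go here."""
    def __init__(self):
        self._blobs = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

store = ObjectStore()

def serialize(payload: dict) -> bytes:
    raw = json.dumps(payload).encode()
    if len(raw) <= MAX_INLINE_BYTES:
        return json.dumps({"inline": payload}).encode()
    key = f"claims/{uuid.uuid4()}"
    store.put(key, raw)                        # park the big payload externally
    return json.dumps({"claim_check": key}).encode()

def deserialize(record_value: bytes) -> dict:
    envelope = json.loads(record_value)
    if "inline" in envelope:
        return envelope["inline"]
    return json.loads(store.get(envelope["claim_check"]))  # redeem the claim check

# The consumer never sees the indirection:
value = serialize({"order_id": "o-1", "attachment": "x" * 1_000_000})
print(deserialize(value)["order_id"])  # o-1
```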
Absolutely amazing explanation!! Could you make a video on Kafka and why it is so fast and reliable?
Would there be a Kafka topic between the mobile users' frontend and the storefront DB? Or are we doing request/response to store user input into the storefront DB?
Hi Adam. On reactivity: I understand the difference between async and request/response, but what is the conclusion, and what is the difference in reactivity between the two architectures? This was not clear.
My question is about his setup: the board he is using to write in front of the camera is an acrylic board, correct? Can anyone correct me if I am wrong here? And what is the best marker to use?
I deliver online training and consultation, and I want to use the same method.
What about the challenges of distributed transactions and error handling in both of these multiservice architectures? In a real scenario this is a key process. When you have an order and there is stock, there is a set of state changes to the user's cart and to the state of the order, but the payment service that got a new event in a topic just fails because there was an issue with the user's bank. I would like you to make a video on these topics, as well as on how to scale both architectures.
I am a software engineering student and I found this quite interesting. Is there any academic/research paper out there that discusses this topic in detail that you could perhaps point out? Thank you.
A paper, for Kafka, kekw
@lamintouray7333 Since you’re studying SWE, you should read about this in indirect communication under distributed systems. That’s where most of the foundational knowledge is.
Hi Adam, I wanted to check: what are all the ways to get the completion status from the fulfilment store in EDA? I can only think of polling, which I believe isn't recommended. Could you suggest the best approach?
For EDA, do we need to use CDC technology?
Just awesome!
If the message broker holding the topics goes down, then so does the entire system. Also, individual services are still transitively coupled to each other.
@@ConfluentDeveloperRelations To the first point, wouldn't you then expect individual services to have redundancy as well? The point is kind of moot if both architectures are resilient in that way.
Also, there is still temporal coupling in an EDA; it's just transitive. For example, the Invoice service can't execute unless the same sequence of events occurs, similar to a REST architecture.
Thank you
Your storefront probably should NOT be rewriting order changes that have reached complete. They should create a modification record. The view of the order will be a merged view of the original record and all modified records. In a document database, this is one collection showing the "current" order representing the merge and a table of changes over time. The changes can be differences but it also could just be the complete order as a second record with a version. In this way, the storefront can always provide order history without needing to pull it from external sources.
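For what it's worth, a toy sketch of that "original record plus modification records" merge; the document shapes and field names are invented for illustration:

```python
# Toy sketch: the "current" order is the base record with all modification
# records applied in version order. Field names are invented for illustration.

base_order = {"order_id": "o-1", "version": 1, "items": ["book"], "status": "complete"}

modifications = [
    {"order_id": "o-1", "version": 2, "changes": {"status": "return_requested"}},
    {"order_id": "o-1", "version": 3, "changes": {"status": "refunded"}},
]

def merged_view(base: dict, mods: list[dict]) -> dict:
    view = dict(base)
    for mod in sorted(mods, key=lambda m: m["version"]):
        view.update(mod["changes"])
        view["version"] = mod["version"]
    return view

print(merged_view(base_order, modifications))
# {'order_id': 'o-1', 'version': 3, 'items': ['book'], 'status': 'refunded'}
```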
I don't understand the need for Kafka here; the storefront could easily keep a history of order data to use for inventory, data lakes, ML, etc.? Also, depending on how the request-response model's architecture is planned, it could work with "events" as well. Just don't design it to require an immediate response; rather, poll regularly for a list of the fulfilled orders and update their "order status" attribute.
If there is a clear boundary between req-res and EDA, I still can't see it. It all depends on how it's implemented, right?
In the EDA example, the storefront would at some point need to display the fulfilled order, so it still needs to consume "responses" from the fulfillment service; it's just asking Kafka instead of the fulfillment service. You still need to define a data structure for the event and hope all your future applications will be able to consume it; it's still a hard contract.
Isn't it true you could create an asynchronous req-res application? The immediate need for a response seems contrived and a beginner's mistake, frankly.
I feel like he's a bit biased
I find this terminology problematic, especially with cloud architecture, where traditional servers are abstracted away. Everything is using events and request-responses.
Sorry, just the introduction was enough. EDA is _not_ about replacing request-response with events, because then you just introduce asynchronous messaging. EDA is about how you tell others about change. Yes, that will be asynchronous, but the same system can still use (synchronous or asynchronous) request-response. Responsive querying uses no events either.
It's so easy to talk about EDA vs req/res without talking about user feedback 👎
Fulfilment is spelled without two Ls
Ugh… don’t get me started!
I’m in the US, and the one word I just can’t let go is “cancelled” vs “canceled”. That word should have two Ls, period. This is where I draw the line. I’m willing to die on this particular hill if I have to. 😏
Man drawing boxes around a single point of failure (Kafka) and saying it's loosely coupled. What's next? Cloud as a decentralized service? I think I just watched a very long product advertisement. Better to go learn some actor model and read about Carl Hewitt's work instead of watching this brainwash.
@ConfluentDevXTeam Sorry for being harsh. I first watched the video, got frustrated, and wrote the comment; then I looked at the author and thought about removing it, but I left it because it has a point. So I want to explain myself.
I don't like the video because it is very chaotic and says nothing about the impact of queue message sizes or message consistency for a single topic; it only complains about API consistency and presents RPC like technology from the 90s.
Watching this video, I feel like I went back in time: servers are still using a thread pool instead of an event loop, everything is synchronous and uses WSDL and SOAP, and a queue is the answer to all the problems, when it's not.
The presentation of the queue's advantages is very chaotic, especially when you're presenting a queue as unique as Kafka, which has message retention and message order consistency. People can ask themselves why not just use ZMQ or any other MQ, or NATS, or just use WebSocket and GraphQL, since the author says REST is obsolete.
For me, the presentation should start with a DAG: a single RPC node and two edges carrying the same message. Then the author should ask: what if you want this message to be processed multiple times, or if this message needs to be reprocessed (draw the edge going back to the same node)? We have an answer for it - Kafka - the queue with retention and guaranteed order. You don't have to mangle your RPC business logic anymore. Don't worry about performance, because Kafka is battle tested by LinkedIn, where it was developed in the first place.
On the other hand, if you need a sequence of things happening with a single message and you have performance problems, or if your messages are very big, maybe it's better to use, for example, serverless solutions or DAG processing frameworks like Airflow, because if you put everything into a queue or everything into RPC you end up with the same problems, just in a different environment.
It should be clearly stated that data design and understanding of data flow are more important than the underlying architecture and business logic.
Because everything is just a wrapper around data. If you don't understand where your data is coming from and where it's going, don't pick a solution.
Event-driven architecture is a headache for developers; it has a lot of pitfalls and I recommend never doing it.
I know all the smart people use Kafka or other systems like SNS, SQS, etc. But something has always bothered me: why can you not use a database to do that? A well-tuned Postgres in RDS, or Dynamo? I mean, store events and let consumers and producers read/write from that DB. Why are Kafka and all these systems preferred?
@@ConfluentDeveloperRelations Adam, thanks for taking the time to give such an in-depth response. BTW, I am not saying Kafka is a bad idea. In my company I am one of the people who advocates a lot for using EDA with AWS services such as SNS, SQS, and EventBridge. But sometimes I question whether we are getting all the juice from these tools, or whether those tools under heavy conditions work better than a good customized solution like a DB in RDS. At the end of the day, brokers have a persistence layer to store simple messages. Also, I guess this comes down to the people you have, whether they have experience with these modern tools or are more aligned with classic DBMS systems, and also the money you have to spend on a solution and how big it will become in the future. Thanks again.
I also prefer databases whenever possible, but the question is, what happens when the database goes down?
Event queues have their advantages, especially when money is involved.
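For anyone curious what the "database as event log" idea discussed above might look like in its simplest form, here is a hedged sketch using SQLite via Python's standard library; table and column names are made up, and a real setup on Postgres/RDS would add indexing, concurrency control, and retention:

```python
# Minimal sketch of an events table that producers append to and consumers poll,
# roughly the "use a database instead of Kafka" idea discussed above.
# SQLite stands in for Postgres/RDS; schema and names are illustrative only.

import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE events (
        seq     INTEGER PRIMARY KEY AUTOINCREMENT,
        topic   TEXT NOT NULL,
        payload TEXT NOT NULL
    )
""")

def produce(topic: str, event: dict) -> None:
    db.execute("INSERT INTO events (topic, payload) VALUES (?, ?)",
               (topic, json.dumps(event)))
    db.commit()

def consume(topic: str, after_seq: int) -> list[tuple[int, dict]]:
    # Each consumer remembers the last seq it processed, much like a Kafka offset.
    rows = db.execute(
        "SELECT seq, payload FROM events WHERE topic = ? AND seq > ? ORDER BY seq",
        (topic, after_seq),
    ).fetchall()
    return [(seq, json.loads(payload)) for seq, payload in rows]

produce("orders", {"order_id": "o-1", "status": "created"})
produce("orders", {"order_id": "o-1", "status": "fulfilled"})
print(consume("orders", after_seq=0))
```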