Have a look at how we at Cashfree Payments achieve Concurrency and Parallelism in this blog by our tech team!
Back in college, we were given 8086 microprocessors to code in assembly language. The teacher would ask us to write simple programs, whose instructions were executed sequentially. Computer software was conventionally written for serial computing. However, languages and hardware have evolved since the 1970s, and developers now have far greater access to horizontal and vertical scaling.
As a result, developers have turned to concurrent and parallel computing.
Concurrency means independent, interruptible processes dealing with many things at once over shared resources and data. The outcome after execution should be the same as if the processes had run sequentially. Concurrency should not be confused with parallelism: parallelism is doing many things at the same time, independently. A concurrent program does not necessarily run its parts at the same time; it simply means the program can execute different jobs independently of each other.
Serial vs Concurrent vs Parallel for a Web Server
- Serial: One instance of the server runs, processing one request at a time and blocking all other requests until the current one completes.
- Concurrent: One instance of the server runs, but it can take in multiple requests at once.
- Parallel only: Multiple instances of the server run, and each instance processes requests serially.
- Concurrent + parallel: Multiple instances of the server run, and each can process concurrent requests.
The ultimate impact of the four different types of flows can be best explained in the following illustration.
A Real World Analogy: Concurrency vs Parallelism
Bob has entered into a cooking competition where he has to cook 4 dishes as soon as possible.
Each dish has 3 phases:
- Preparation takes 5 minutes
- Microwaving takes 20 minutes
- Presentation takes another 5 minutes
- Total time for each dish = 30 minutes
Bob Tries the Serial Approach
Bob cooks one dish and then moves to the next one.
Total time taken is 30 minutes x 4 dishes = 120 minutes
Bob Tries the Concurrent Approach
Bob prepares Dish 1 and puts it in the microwave. Then he prepares Dish 2 and puts it in the microwave and so on. After Dish 4 is put in the microwave, Bob starts on Dish 1 where he has to wait for 5mins and then begin the presentation. He does the presentation for Dish 1, moves on to Dish 2 for presentation and so on.
Total time taken = 4×5 + 5 + 4×5 = 45 minutes
Bob Tries the Parallel Approach
Bob cooks Dishes 1 and 2 while his assistant cooks Dishes 3 and 4, but each of them cooks their dishes sequentially. Each needs 60 minutes for two dishes.
Total time = 60 minutes
Bob Tries the Parallel + Concurrent Approach
Bob and his assistant each cook two dishes concurrently. For each of them the timeline is 5 (Dish 1 prep) + 5 (Dish 2 prep) + 15 (remaining wait for Dish 1's microwave) + 5 (Dish 1 presentation) + 5 (Dish 2 presentation) = 35 minutes.
Total time = 35 minutes
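The four timings above follow from simple arithmetic over the prep, microwave and presentation durations. A sketch in Go that encodes each schedule:

```go
package main

import "fmt"

const (
	prep    = 5  // minutes per dish
	micro   = 20 // minutes per dish
	present = 5  // minutes per dish
	dishes  = 4
)

// serialTime: one cook finishes each dish completely before starting the next.
func serialTime() int { return dishes * (prep + micro + present) }

// concurrentTime: one cook preps all dishes back to back, waits out the
// remainder of Dish 1's microwave time, then presents all dishes back to back.
// (Assumes micro > (dishes-1)*prep, as in Bob's numbers.)
func concurrentTime() int { return dishes*prep + (micro - (dishes-1)*prep) + dishes*present }

// parallelTime: two cooks, each cooking two dishes serially.
func parallelTime() int { return (dishes / 2) * (prep + micro + present) }

// parallelConcurrentTime: two cooks, each overlapping two dishes.
func parallelConcurrentTime() int { return 2*prep + (micro - prep) + 2*present }

func main() {
	fmt.Println(serialTime(), concurrentTime(), parallelTime(), parallelConcurrentTime())
	// 120 45 60 35
}
```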
It is also important to note that concurrency is an aspect of the problem domain—your code needs to handle multiple simultaneous (or near simultaneous) events.
To achieve concurrency, languages provide different tools:
Forms of concurrency control
- Concurrent requests in HTTP servers
- Threads in Java
- Goroutines in Go
- Concurrent transactions in databases
Concurrency helps in:
- Performing more operations in a given time
- Utilising hardware better
- Completing operations quicker, leading to faster turnaround times
With Great Power Comes Great Responsibility
While there are obvious benefits to using concurrency, writing and debugging concurrent code is hard because the chances of things going wrong grow very quickly as scale increases. Common problems include race conditions, atomicity violations, deadlocks and starvation. Languages like Go and Elixir have a few tricks up their sleeves for writing more resilient concurrent code.
Because these operations are independent, when they act on the same resource they are unaware of what the others are doing to it.
In the case of a database transaction, this can be something like reading a row's value, adding something to it, and storing it back. If two or more concurrent processes do this at scale, there is a high chance the stored value will be wrong (a lost update). If you are using MySQL with InnoDB, an atomic `UPDATE` (for example, `UPDATE accounts SET balance = balance + 10`) is your safest bet.
Other ways to deal with this are table or row locks (pessimistic locking), or letting your application handle it (optimistic locking), where the application rolls back the operation if the value has changed from what it originally read.
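An in-memory analogue of optimistic locking can be built with compare-and-swap: read the value, compute the update, and commit only if nothing changed in between, retrying on conflict. A sketch (the `optimisticAdd` name and the balance scenario are ours):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// optimisticAdd mimics optimistic locking: read the current value, compute
// the update, then commit only if nothing changed in between; on conflict,
// discard the work and retry with a fresh read.
func optimisticAdd(balance *int64, delta int64) {
	for {
		old := atomic.LoadInt64(balance) // "read the row"
		// like "UPDATE ... WHERE value = old": commit only if still unchanged
		if atomic.CompareAndSwapInt64(balance, old, old+delta) {
			return
		}
		// another writer committed first; loop and retry
	}
}

func main() {
	var balance int64
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			optimisticAdd(&balance, 1)
		}()
	}
	wg.Wait()
	fmt.Println(balance) // always 100; a naive read-modify-write could lose updates
}
```

The retry loop is the "roll back and try again" step of optimistic locking; pessimistic locking would instead take a mutex (or a row lock) around the whole read-modify-write.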
If your application does not use a database but you still need concurrency control, there are other ways to achieve it:
- You can use a queue that takes in all the requests and keep consumption of the queue sequential; this keeps your data consistent.
- You can enforce idempotency in your requests, so that repeating the same request does not throw an error (or cause a duplicate effect) if prior attempts have not completed.
- You can use ETags in request headers to deal with concurrency in REST APIs.
- If your service is written in Go, you can use goroutines and channels to communicate between them.
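The first and last options combine naturally in Go: many concurrent producers push onto a channel (the queue) while a single consumer drains it sequentially, so only one goroutine ever touches the shared state. A sketch, with a summation standing in for the real work:

```go
package main

import (
	"fmt"
	"sync"
)

// sumViaQueue lets many concurrent producers push into a channel while a
// single consumer drains it sequentially, so only one goroutine ever touches
// the shared total and the result stays consistent without locks.
func sumViaQueue(values []int) int {
	requests := make(chan int)
	done := make(chan int)

	go func() { // the lone "queue consumer"
		total := 0
		for v := range requests {
			total += v
		}
		done <- total
	}()

	var wg sync.WaitGroup
	for _, v := range values {
		wg.Add(1)
		go func(n int) { // concurrent "producers"
			defer wg.Done()
			requests <- n
		}(v)
	}
	wg.Wait()
	close(requests)
	return <-done
}

func main() {
	fmt.Println(sumViaQueue([]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10})) // 55
}
```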
As a leading and fast-growing payments and API banking solutions company, Cashfree Payments provides a full-stack payments solution enabling businesses in India to collect payments and make payouts via all available methods with a single integration. We run HTTP servers that serve our merchant partners. These web servers are concurrent by default, and several instances of each server run in our Kubernetes (k8s) cluster. Under high load we see a large number of transactions or refunds coming in, so our systems are designed to be resilient and to keep database and cache operations atomic. We do this using queues and optimistic and pessimistic locking. Our APIs also take an idempotency key as part of the header to ensure that parallel requests for the same operation are not executed twice. We use concurrency to generate our reports as well.
How Cashfree Payments Achieves Concurrency and Parallelism
In a Web Server
At Cashfree Payments, our tech runs on hundreds of web servers in our Kubernetes cluster. Each web server is equipped to handle thousands of concurrent requests, and each also has multiple instances running. Some of the most commonly used server frameworks that we build our backend applications with are:
- Labstack Echo for Go
- Spring Boot for Java
- Express for Node.js
At the Code level
While a web server gives you concurrency and parallelism out of the box, that is not the case when you want to achieve high throughput for heavier tasks. At Cashfree Payments, we rely on various techniques depending on the language. A simple yet heavy job, for example, is the generation of reports.
The tools and techniques that we normally use are:
- Goroutines, Go channels and Java threads for intra-server concurrency
- SQS, SNS and Kafka for inter-server concurrency
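For intra-server concurrency on a job like report generation, a worker pool is the usual shape: a fixed number of goroutines pull job IDs from a channel. A sketch under our own illustrative names (the "report" here is just a formatted string):

```go
package main

import (
	"fmt"
	"sync"
)

// generateReports fans report IDs out to a fixed pool of worker goroutines,
// the pattern we'd reach for on a heavy job like report generation.
func generateReports(ids []int, workers int) []string {
	jobs := make(chan int)
	results := make(chan string, len(ids))

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for id := range jobs { // each worker pulls the next pending job
				results <- fmt.Sprintf("report-%d.csv", id)
			}
		}()
	}

	for _, id := range ids {
		jobs <- id
	}
	close(jobs)
	wg.Wait()
	close(results)

	out := make([]string, 0, len(ids))
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	reports := generateReports([]int{1, 2, 3, 4, 5}, 3)
	fmt.Println(len(reports)) // 5
}
```

For inter-server concurrency, the channel is replaced by a durable queue (SQS or Kafka) and the workers by separate consumer processes, but the shape is the same.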
When Things Go South
With high concurrency and parallelism comes the cost of handling atomicity, deadlocks, starvation and race conditions. Suppose Super11 uses our APIs to send money to their customers after a match completes, and they have a mechanism to retry requests. If 100K payouts have to be made, and the banking partner that Cashfree Payments relies on for the actual transfers has its own API limits, requests can pile up into a large number of open connections, leading to starvation.
Similar scenarios might happen when a bad query runs in a database.
Cashfree Payments puts a lot of effort into avoiding these kinds of problems:
- Moving requests to async calls where the request type allows it; all async calls have some form of retries implemented
- Having a producer-consumer workflow
- Using distributed locks
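One simple guard against the open-connection pile-up described above is to bound in-flight calls to the downstream dependency. A common Go idiom uses a buffered channel as a semaphore; a sketch with illustrative names:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// callWithLimit caps the number of in-flight calls to a downstream dependency
// (say, a banking partner with API limits) using a buffered channel as a
// semaphore, so a burst of retries cannot open unbounded connections.
func callWithLimit(jobs, limit int, call func()) {
	sem := make(chan struct{}, limit)
	var wg sync.WaitGroup
	for i := 0; i < jobs; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot; blocks once `limit` are in flight
			defer func() { <-sem }() // release the slot
			call()
		}()
	}
	wg.Wait()
}

func main() {
	var mu sync.Mutex
	inFlight, peak := 0, 0
	callWithLimit(50, 5, func() {
		mu.Lock()
		inFlight++
		if inFlight > peak {
			peak = inFlight
		}
		mu.Unlock()
		time.Sleep(time.Millisecond) // hold the slot briefly
		mu.Lock()
		inFlight--
		mu.Unlock()
	})
	fmt.Println(peak <= 5) // true: never more than 5 calls run at once
}
```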
These kinds of efforts are part of the reason why it has been such a fun and interesting journey at Cashfree Payments. We are excited for all our future experiments as we seek to create seamless, secure and strong payments and API banking infrastructure for India.
If this sounds like something that hits all the right spots, cerebrally speaking, then we have some exciting opportunities lined up for engineers just like you!