Synchronizing is the "key"
Through my kernel studies I have found some very good material about synchronization, and I want to share it with you.
This first post is intended to be purely conceptual; I'll write another one about the tools that Linux provides for us to synchronize accesses.
In a shared memory application, care must be taken to ensure that shared resources are protected from concurrent access.
Concurrent access of shared data is a recipe for instability!
Properly protecting shared resources can be tough.
When SMP support was introduced in the 2.0 kernel, the need for synchronization arose.
SMP means that kernel code can run simultaneously on two or more processors; consequently, without protection, kernel code running on two different processors can access shared data at exactly the same time.
With the 2.6 kernel, the Linux kernel became preemptive: the scheduler can preempt kernel code at virtually any point and reschedule another task.
Critical Regions
Code paths that access and manipulate shared data are called critical regions.
It is usually unsafe for multiple threads of execution to access the same resource simultaneously.
To prevent concurrent access, the programmer must ensure that the code executes atomically, that is, the code completes without interruption as if the entire critical region were one indivisible instruction.
For a first example, let's say that we have a cash machine that withdraws money from a person's personal bank account.
After the user has asked for a specific amount of money, the cash machine needs to ensure that the money actually exists in that user's account. If it does, the machine then needs to deduct the withdrawal from the total funds available.
Now presume that another deduction from the user's funds is happening at the same time.
Assume that the user's spouse is initiating another withdrawal at another ATM, or electronically transferring funds out of the account, or the bank is deducting a fee from the account.
These systems would have similar code:
- Check whether the deduction is possible
- Compute the new total funds
- Execute the physical deduction
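In C, that sequence might look something like this minimal, unsynchronized sketch (`struct account` and its `balance` field are hypothetical names, just for illustration):

```c
/* Unsynchronized sketch of the check-then-deduct sequence.
 * Both the ATM withdrawal and the bank fee would run code like
 * this against the same shared account. */
struct account {
    int balance;    /* total funds available, in dollars */
};

int deduct(struct account *acct, int amount)
{
    if (acct->balance < amount)     /* 1. check whether the deduction is possible */
        return -1;                  /* insufficient funds */

    int new_total = acct->balance - amount;  /* 2. compute the new total funds */
    acct->balance = new_total;               /* 3. execute the physical deduction */
    return 0;
}
```

The race lives between the check and the update: nothing stops a second thread of execution from running the same three steps in between.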
Assume the customer has a total of $105 in the bank, the withdrawal is for $100, and the fee is $10. One of these transactions cannot correctly complete without sending the account into the red.
We would expect something like this:
- The fee transaction happens first
- Ten dollars is less than $105, so $10 is subtracted from $105 to get a new total of $95
- $10 is pocketed by the bank
- The withdrawal then runs against the $95 balance: $100 is more than $95, so it is correctly rejected
Now, here is where synchronization fails...
Assume that the two transactions are initiated at roughly the same time.
Both transactions verify that sufficient funds exist: $105 is more than both $100 and $10, so all is good.
Then the withdrawal process subtracts $100 from $105, yielding $5.
The fee transaction then does the same, subtracting $10 from $105 and getting $95.
The withdrawal process updates the user's new total available funds to $5.
Now the fee transaction also updates the new total, resulting in $95.
Free money!!
The bank system must lock the account during certain operations, making each transaction atomic.
Such transactions must occur in their entirety, without interruption, or not occur at all.
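A sketch of how the deduction becomes atomic once a lock guards the account (using a POSIX mutex here purely for illustration; the struct is the same hypothetical one as above, extended with a lock):

```c
#include <pthread.h>

struct account {
    pthread_mutex_t lock;   /* protects balance */
    int balance;
};

int deduct(struct account *acct, int amount)
{
    int ret = 0;

    pthread_mutex_lock(&acct->lock);    /* enter the critical region */
    if (acct->balance < amount)
        ret = -1;                       /* insufficient funds */
    else
        acct->balance -= amount;        /* check and deduct are now indivisible */
    pthread_mutex_unlock(&acct->lock);  /* leave the critical region */

    return ret;
}
```

With the lock held across both the check and the update, the fee and the withdrawal can no longer interleave inside the critical region.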
Locking
A lock provides a mechanism much like a door. Imagine the room beyond the door as the critical region.
Inside the room, only one thread of execution can be present at a given time. When a thread enters the room, it locks the door behind it.
When the thread is finished manipulating the shared data, it leaves the room and unlocks the door.
If another thread reaches the door while it is locked, it must wait for the thread inside to exit the room and unlock the door before it can enter.
Threads hold locks; locks protect data.
Locks are entirely a programming construct that the programmer must take advantage of.
Locks come in various shapes and sizes; Linux alone implements a handful of different locking mechanisms. The difference among them is their behavior when the lock is contended (already in use): some locks simply busy wait (spin in a tight loop, waiting for the lock to become available), whereas other locks put the current task to sleep until the lock becomes available.
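As a sketch of the two flavors in kernel code (these are the real kernel spinlock and mutex primitives; `shared_count` is just a made-up variable they protect):

```c
#include <linux/spinlock.h>
#include <linux/mutex.h>

static DEFINE_SPINLOCK(count_slock);
static DEFINE_MUTEX(count_mutex);
static int shared_count;

/* A contending CPU busy waits, spinning until the lock is free;
 * suited to short critical regions. */
void count_inc_spin(void)
{
        spin_lock(&count_slock);
        shared_count++;
        spin_unlock(&count_slock);
}

/* A contending task is put to sleep until the lock is free;
 * only usable in contexts that are allowed to sleep. */
void count_inc_sleep(void)
{
        mutex_lock(&count_mutex);
        shared_count++;
        mutex_unlock(&count_mutex);
}
```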
Another important thing about locks is that they are implemented using atomic operations that ensure no race exists.
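To illustrate the idea, here is a toy spinlock built on an atomic test-and-set, using C11 atomics in userspace (a conceptual sketch, not how the kernel actually implements its locks):

```c
#include <stdatomic.h>

struct toy_spinlock {
    atomic_flag locked;     /* clear = unlocked, set = locked      */
};                          /* initialize with ATOMIC_FLAG_INIT    */

void toy_spin_lock(struct toy_spinlock *l)
{
    /* Atomically set the flag and get its previous value. If it was
     * already set, someone else holds the lock, so busy wait and retry.
     * The atomicity of test-and-set is what makes the lock race-free:
     * two threads can never both see the flag as clear. */
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ;   /* spin */
}

void toy_spin_unlock(struct toy_spinlock *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```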
Deadlocks
A deadlock is a condition involving one or more threads of execution and one or more resources, such that each thread is waiting for one of the resources, but all the resources are already held.
A good analogy is a four-way traffic stop. If each car at the stop decides to wait for the other cars before going, no car will ever go and we have a traffic deadlock.
Consider n threads and n locks. If each thread holds a lock that the other thread wants, all threads block while waiting for their respective locks to become available. The most common example is with two threads and two locks, which is often called deadly embrace.
Each thread is waiting for the other, and neither thread will ever release its original lock; therefore, neither lock will ever become available.
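Here is the deadly embrace in a self-contained sketch with two POSIX mutexes (hypothetical names; any given run may or may not hit the deadlock window, but the bug is always there):

```c
#include <pthread.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

void *thread_one(void *arg)
{
    pthread_mutex_lock(&lock_a);    /* holds A ...            */
    pthread_mutex_lock(&lock_b);    /* ... and waits for B    */
    /* critical region */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

void *thread_two(void *arg)
{
    pthread_mutex_lock(&lock_b);    /* holds B ...                             */
    pthread_mutex_lock(&lock_a);    /* ... and waits for A -> deadly embrace   */
    /* critical region */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread_one, NULL);
    pthread_create(&t2, NULL, thread_two, NULL);
    pthread_join(t1, NULL);         /* may never return */
    pthread_join(t2, NULL);
    return 0;
}
```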
So, to prevent such scenarios, we can follow a few simple rules:
- Lock ordering is vital. Nested locks must always be obtained in the same order
- Prevent starvation. "If foo does not occur, will bar wait forever?"
- Do not double acquire the same lock
- Design for simplicity
Assume you have the cat, dog, and fox locks, which protect data structures of the same names.
Now assume you have a function that needs to work on all three of these data structures simultaneously, perhaps to copy data between them.
The data structures require locking to ensure safe access. If one function acquires the locks in the order cat, dog and fox, then every other function must obtain these locks in this same order.
For example, it is a potential deadlock to first obtain the fox lock and then obtain the dog lock (because the dog lock must always be acquired prior to the fox lock).
Thread one ends up waiting for the fox lock, which thread two holds, while thread two is waiting for the dog lock, which thread one holds. Neither ever releases its lock, and hence both wait forever.
If locks were always obtained in the same order, a deadlock in this manner would not be possible.
Whenever locks are nested within other locks, a specific ordering must be obeyed.
It is also common practice to release locks in an order inverse to that in which they were acquired.
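Putting both rules together for the cat, dog, and fox locks, a sketch might look like this (POSIX mutexes again, with hypothetical names):

```c
#include <pthread.h>

pthread_mutex_t cat_lock = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t dog_lock = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t fox_lock = PTHREAD_MUTEX_INITIALIZER;

void copy_between_animals(void)
{
    /* Always acquire in the agreed order: cat, then dog, then fox.
     * Every function that takes more than one of these locks must
     * use this exact order. */
    pthread_mutex_lock(&cat_lock);
    pthread_mutex_lock(&dog_lock);
    pthread_mutex_lock(&fox_lock);

    /* ... work on all three data structures ... */

    /* Release in the inverse order of acquisition. */
    pthread_mutex_unlock(&fox_lock);
    pthread_mutex_unlock(&dog_lock);
    pthread_mutex_unlock(&cat_lock);
}
```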