Writeup

Summary: We created a wrapper around several pthread synchronization operations, including mutex lock, rwlock, barrier, and thread join. With this wrapper, we implemented deadlock detection, using graph algorithms to warn users when their code has deadlocked.

Background: Our wrapper around the pthread mutex operations, which we call “Smart Pthreads”, is designed so that it can warn users when deadlock occurs. Deadlock comes up fairly often in parallel programming, and it is a serious problem when it does. Because deadlocks frequently stem from race conditions and fail silently, simply causing the program to hang, they are hard to notice and diagnose by hand. However, a wrapper around the mutex operations can detect that a deadlock has happened, because the problem reduces to online cycle detection in a graph. This is discussed further in the approach.

To use our code, a programmer compiles their program with our wrapper around the lock operations, and the resulting binary is set up to detect any deadlocks. If a deadlock occurs during the execution of the program, our implementation notices it and prints a warning saying that deadlock has occurred and how it happened. At that point the program exits rather than hanging until the user interrupts it. Thus, the user knows immediately that their code has deadlocked and can use this information to help debug it.
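As an illustration of how such a wrapper can be wired in, one common approach is to shadow the standard pthread calls with macros in a header, along the lines of the sketch below. The header and function names here are hypothetical, chosen for the example, and are not necessarily the ones our code uses.

    /* smart_pthread.h -- hypothetical illustration of one way a wrapper
     * can shadow the standard pthread calls.  The file that implements
     * the smart_* functions must not use these macros (or must call the
     * real functions as (pthread_mutex_lock)(m)) so that it does not
     * wrap its own internal calls. */
    #ifndef SMART_PTHREAD_H
    #define SMART_PTHREAD_H

    #include <pthread.h>

    /* Wrapped versions that record lock-graph edges before delegating
     * to the real pthread functions. */
    int smart_mutex_lock(pthread_mutex_t *m);
    int smart_mutex_unlock(pthread_mutex_t *m);

    /* Redirect user code that calls the standard names. */
    #define pthread_mutex_lock(m)   smart_mutex_lock(m)
    #define pthread_mutex_unlock(m) smart_mutex_unlock(m)

    #endif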

Approach: Our implementation is designed as a wrapper around the mutex functions in C. It is not tied to any particular machine, and it can be used as a debugging tool in any context as long as the programmer is writing concurrent C code.

Our implementation uses the fact that deadlock detection can be reduced to online cycle detection in a graph. If a deadlock occurs, then some thread requires a mutex in order to proceed, that mutex is held by another thread, which in turn needs some other mutex, and so on, eventually looping back to the original thread. If the situation is not of this form, then the last thread in the chain can make progress in some way, so we are not deadlocked. Thus, we only need the ability to do online cycle detection, and there has already been a fair amount of research in this area. We found a paper that describes an efficient way to determine whether a cycle exists in a graph as edges are added (Pearce and Kelly; see the references). The implementation is as follows.
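A minimal sketch of how the lock wrapper can feed such a graph is shown below. The vertex and graph helpers are illustrative placeholders for the bookkeeping described in the text, not the exact names in our implementation; the cycle check itself is sketched after the next paragraph.

    #include <pthread.h>

    /* Placeholders for the lock-graph bookkeeping; vertices are opaque
     * handles for threads and mutexes.  graph_add_edge() returns nonzero
     * if inserting the edge closed a cycle, i.e. a deadlock. */
    typedef struct vertex vertex_t;

    vertex_t *thread_vertex(void);               /* vertex for the calling thread */
    vertex_t *mutex_vertex(pthread_mutex_t *m);  /* vertex for a given mutex      */
    int       graph_add_edge(vertex_t *from, vertex_t *to);
    void      graph_remove_edge(vertex_t *from, vertex_t *to);
    void      report_deadlock(void);             /* print the cycle and exit      */

    int smart_mutex_lock(pthread_mutex_t *m)
    {
        vertex_t *self = thread_vertex();
        vertex_t *mtx  = mutex_vertex(m);

        /* Fast path: got the lock immediately, record the "held by" edge. */
        if (pthread_mutex_trylock(m) == 0) {
            graph_add_edge(mtx, self);
            return 0;
        }

        /* Slow path: record the "waiting on" edge; if that closes a cycle,
         * every thread on the cycle is blocked, so we have a deadlock. */
        if (graph_add_edge(self, mtx))
            report_deadlock();

        int rc = pthread_mutex_lock(m);          /* otherwise block as usual */
        graph_remove_edge(self, mtx);            /* no longer waiting        */
        graph_add_edge(mtx, self);               /* now holding the mutex    */
        return rc;
    }

The matching unlock wrapper would remove the "held by" edge, and the rwlock, barrier, and join wrappers can feed the same graph with similar bookkeeping.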

For this graph, we maintain a topological sort of the vertices: an ordering in which any vertex that is the child of another vertex has a position strictly larger than that of its parent. Whenever an edge is added, which happens either because a thread tried to pick up a mutex and failed or because it succeeded, we insert that edge into the graph. If the edge goes forward in the ordering, we already know it cannot create a deadlock. If it goes the wrong way, back up the ordering, a cycle may be present, and we can determine this for sure by depth-first searching from the new edge's target. If a cycle exists, we warn the user, report the cycle, and exit the program. If no cycle exists, we must repair the topological ordering, since we now have an edge going the wrong way; this can be done by shifting the positions of the vertices that lie between the two endpoints.
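To make the edge-insertion step concrete, here is a compact sketch of one way it can be implemented, as a simplified variant of the approach in the Pearce and Kelly paper. The fixed capacity, integer vertex indices, and recursive depth-first search are only for brevity, and vertex registration (assigning initial positions) is omitted.

    #define MAX_V 256                 /* illustrative fixed capacity */

    /* Adjacency matrix plus a maintained topological order:
     * ord[v] is v's position, pos[i] is the vertex at position i. */
    static int adj[MAX_V][MAX_V];
    static int ord[MAX_V];
    static int pos[MAX_V];
    static int nvert;

    /* Depth-first search forward from v, visiting only vertices whose
     * position is <= bound; marks everything reached in 'seen'.
     * Returns 1 if 'target' is reached, i.e. the new edge closes a cycle. */
    static int dfs(int v, int bound, int target, int *seen)
    {
        if (v == target) return 1;
        seen[v] = 1;
        for (int w = 0; w < nvert; w++)
            if (adj[v][w] && !seen[w] && ord[w] <= bound)
                if (dfs(w, bound, target, seen))
                    return 1;
        return 0;
    }

    /* Insert the edge x -> y.  Returns 1 if this creates a cycle (a
     * deadlock), 0 otherwise; in the latter case the topological order
     * is repaired by reshuffling the affected region, as described above. */
    int insert_edge(int x, int y)
    {
        adj[x][y] = 1;
        if (ord[x] < ord[y])
            return 0;                 /* edge already respects the order */

        int seen[MAX_V] = {0};
        if (dfs(y, ord[x], x, seen))
            return 1;                 /* cycle found: deadlock */

        /* No cycle: rebuild the region between ord[y] and ord[x] so that
         * everything reachable from y (the 'seen' set) ends up after the
         * rest, preserving relative order within each group. */
        int lo = ord[y], hi = ord[x];
        int tmp[MAX_V], k = 0;
        for (int i = lo; i <= hi; i++)        /* unaffected vertices first */
            if (!seen[pos[i]]) tmp[k++] = pos[i];
        for (int i = lo; i <= hi; i++)        /* then the shifted ones     */
            if (seen[pos[i]]) tmp[k++] = pos[i];
        for (int i = lo; i <= hi; i++) {
            pos[i] = tmp[i - lo];
            ord[pos[i]] = i;
        }
        return 0;
    }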

With this in place, we can tell whether a cycle exists in the graph, and therefore whether the code has entered a deadlock at any point, so the user is informed immediately when a deadlock occurs and is even told how it happened. However, this was not our original approach. Originally, we were extremely overoptimistic, attempting to undo any changes made since the last lock had been obtained and roll back to that point, so that the code could continue to run. We soon discovered this was nearly impossible, and that implementing it correctly would essentially amount to transactional memory.

From here, we moved to a design where, upon deadlocking, one thread would roll back to the point where it held no locks, and the others could then proceed from there. Without changes being undone along the way, this would be easier to accomplish than the previous design, since we would not need to reacquire any locks that had been freed in order to undo changes. However, there was still an edge case: if thread 1 locks mutex A, thread 2 locks mutex B, both reach a barrier, and then both try to lock the other mutex, no progress can be made. Under this scheme, one thread would back up, the other would stay stuck at its mutex until the first reached the barrier again, and the deadlock would reappear. The result is a livelock, which is just as bad as the original deadlock. We therefore reevaluated our design and decided it would be best to build something useful for debugging: code that simply announces when deadlock has occurred instead of attempting to fix it by itself. This is still an improvement, although not as drastic a one as we had originally (and overly optimistically) hoped for.

Results:

We have a working implementation of the deadlock detection. To exercise it, we developed a simple piece of code that creates two threads which loop repeatedly, where one thread locks lock A and then lock B, while the other locks lock B and then lock A. Both threads read a shared counter, print its value, and increment it, in order to waste time and demonstrate that they are doing work. A program of this shape is very likely to deadlock, and testing it without our implementation showed that it frequently did. A sketch of this kind of test appears below.
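The sketch below is not our exact test source, but it has the same shape as the test described above: two threads acquiring the same pair of mutexes in opposite orders around a shared counter, which deadlocks with high probability under plain pthreads.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;
    static int counter = 0;

    /* Thread 1: always takes A then B. */
    static void *worker_ab(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock_a);
            pthread_mutex_lock(&lock_b);
            printf("ab sees %d\n", counter);
            counter++;
            pthread_mutex_unlock(&lock_b);
            pthread_mutex_unlock(&lock_a);
        }
        return NULL;
    }

    /* Thread 2: takes B then A -- the opposite order, so the two threads
     * eventually each hold one lock and wait forever on the other. */
    static void *worker_ba(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock_b);
            pthread_mutex_lock(&lock_a);
            printf("ba sees %d\n", counter);
            counter++;
            pthread_mutex_unlock(&lock_a);
            pthread_mutex_unlock(&lock_b);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker_ab, NULL);
        pthread_create(&t2, NULL, worker_ba, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }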

However, when running the same test with our deadlock detection software, we immediately see that the code has deadlocked, along with a textual representation of the cycle in the graph. This shows that our code works: it announces when deadlock has occurred. Although our wrapper is necessarily slower than plain mutexes, it remains close to them in running time, and it terminates in the case of a deadlock; in that case the speedup is as large as possible, since we go from an infinite runtime to a finite one, even though the program ends with an error rather than the desired result. In any case, runtime is not much of an issue here, since this is mainly a debugging tool: it is more important that we accurately report a deadlock than that the program runs quickly.

References:

Deadlocking test code is loosely based on this code: http://www.thegeekstuff.com/2012/05/c-mutex-examples/

Our cycle detection is heavily based on the pseudocode in this paper: David J. Pearce and Paul H. J. Kelly. A Dynamic Batch Algorithm for Maintaining a Topological Order.