Checkpoint

Work Completed

Cycle detection has been implemented, so given a set of threads and locks that report their dependencies, we can determine whether they are deadlocked. The data structures needed to support this are also in place. In addition, we have deeply investigated the possibility of reverting when the code is deadlocked, so that threads can back out of the deadlock and hopefully make progress afterward. This is quite difficult: to implement it correctly, a thread may need to reacquire a lock it had previously released. For a short time we thought this might itself deadlock, which would be terrible, as our deadlock avoidance should not deadlock. However, we have since proven that, with the way we want to revert, this can never happen.

Our method of reversion rolls each deadlocked thread back to just before its last lock or unlock operation. If that last operation was a lock, we can simply back up past it; if it was an unlock, the thread must reacquire that lock. The only interesting case is therefore when every deadlocked thread unlocked after its last lock. Reversion would then deadlock if these threads formed a cycle in which each held a lock another needed in order to revert: for example, thread 1 needs mutex A while holding mutex B, thread 2 needs B while holding C, and so on, until a cycle is completed. However, this cannot happen. Since releasing A was the last thing thread 1 did, that release happened strictly after thread 1 acquired B. And since thread 1 currently holds B, thread 2 must have released B before thread 1 acquired it, and hence before thread 1 released A. Likewise, thread 3 released C before thread 2 released B, and so on around the cycle, until we conclude that thread 1 released A strictly before it released A. This is nonsensical, so no such deadlock can occur.
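To make the flavor of this concrete, here is a minimal sketch of the kind of check involved (not our actual code; the fixed thread bound and the adjacency-matrix representation are simplifying assumptions): a depth-first search over a wait-for graph, where an edge from thread t to thread u means t is blocked on a lock u holds, reports a deadlock exactly when it finds a cycle.

```c
#include <stdbool.h>

#define MAX_THREADS 64

/* wait_for[t][u] is true when thread t is blocked on a
 * lock currently held by thread u. */
static bool wait_for[MAX_THREADS][MAX_THREADS];

enum color { WHITE, GRAY, BLACK };

/* DFS: reaching a GRAY node closes a cycle, i.e. a deadlock. */
static bool dfs(int t, enum color color[])
{
    color[t] = GRAY;
    for (int u = 0; u < MAX_THREADS; u++) {
        if (!wait_for[t][u])
            continue;
        if (color[u] == GRAY)
            return true;                 /* cycle found */
        if (color[u] == WHITE && dfs(u, color))
            return true;
    }
    color[t] = BLACK;
    return false;
}

/* Returns true if any cycle (deadlock) exists in the wait-for graph. */
bool deadlocked(void)
{
    enum color color[MAX_THREADS] = { WHITE };   /* WHITE == 0 */
    for (int t = 0; t < MAX_THREADS; t++)
        if (color[t] == WHITE && dfs(t, color))
            return true;
    return false;
}
```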

However, even though this proof shows reversion is always possible, it is still quite impractical. Formally, it is never possible to revert all changes unless we use a lazy implementation that defers every memory effect until all locks have been released; otherwise, for example, a thread may have sent packets that cannot be unsent. Less formally, consider the work needed just to prove that reversion will not cause its own deadlock. Thus, we have decided to refocus our project. Instead of attempting deadlock detection and reversion, we will now focus on deadlock detection for more types of synchronization primitives. For example, it is more interesting to handle a program in which read-write locks, standard locks, barriers, and thread joins combine to deadlock than one using only standard locks. Doing this amounts to online concurrent cycle detection in a directed graph. This topic, minus the concurrent part, is well covered in this paper: http://www.cs.princeton.edu/~sssix/papers/dto-journal.pdf. Its basic idea is to maintain a topological order of the graph to limit the path search when a new edge is added. We plan to use an algorithm described in this paper and modify it for concurrent operation.
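As a rough sketch of that idea (our reading of it, with the re-indexing step after a legal insertion omitted, fixed-size arrays assumed for brevity, and no overflow checks): every node carries a topological index, a new edge u -> v needs no search at all when the order is already consistent, and otherwise the search for a cycle can be confined to nodes whose index falls in the affected range.

```c
#include <stdbool.h>

#define MAX_NODES 128
#define MAX_DEG    16

static int ord[MAX_NODES];            /* assumed to hold a valid
                                         topological order (distinct) */
static int adj[MAX_NODES][MAX_DEG];   /* outgoing edges */
static int deg[MAX_NODES];

/* Forward search from v, restricted to nodes with ord[] <= bound.
 * Reaching u means the new edge u -> v would close a cycle. */
static bool reaches(int v, int u, int bound, bool seen[])
{
    if (v == u)
        return true;
    seen[v] = true;
    for (int i = 0; i < deg[v]; i++) {
        int w = adj[v][i];
        if (!seen[w] && ord[w] <= bound && reaches(w, u, bound, seen))
            return true;
    }
    return false;
}

/* Try to insert edge u -> v; returns false if it would create a cycle.
 * (The step that repairs ord[] after a legal insert is omitted here.) */
bool add_edge(int u, int v)
{
    if (ord[v] > ord[u]) {            /* order already consistent */
        adj[u][deg[u]++] = v;
        return true;
    }
    bool seen[MAX_NODES] = { false };
    if (reaches(v, u, ord[u], seen))
        return false;                 /* cycle: refuse the edge */
    adj[u][deg[u]++] = v;
    /* ...the paper's algorithm re-indexes the affected region here... */
    return true;
}
```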

Goals and deliverables: we have a way to detect cycles, and thus deadlocks, for standard locks. We also have a proof that attempting to revert will not itself cause deadlock. However, we no longer believe we will be able to produce full reversion, as such a system would essentially turn our mutexes into a transactional memory system. Our goal instead is simply to be able to revert to before we held the locks without actually changing anything besides the location in the program, as sketched below. In addition, we will develop deadlock detection for more types of synchronization primitives to make up for the lack of reversion.
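One plausible shape for this kind of limited reversion is a setjmp/longjmp checkpoint taken before the first acquisition. This is purely a sketch: smart_lock and smart_unlock are hypothetical names, the stand-in bodies below do no detection, and the real library would release the thread's locks and longjmp back on detecting deadlock, without undoing any memory writes.

```c
#include <setjmp.h>
#include <pthread.h>

/* Hypothetical per-thread checkpoint; the real API may differ. */
static _Thread_local jmp_buf lock_checkpoint;

/* Stand-ins: the real library would run cycle detection here and
 * longjmp(lock_checkpoint, 1) when it finds a deadlock. */
static void smart_lock(pthread_mutex_t *m)   { pthread_mutex_lock(m); }
static void smart_unlock(pthread_mutex_t *m) { pthread_mutex_unlock(m); }

static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

void critical_section(void)
{
    if (setjmp(lock_checkpoint) != 0) {
        /* Deadlock was detected: our locks were released and control
         * jumped back here.  Memory writes are NOT undone; we simply
         * fall through and retry from before the first acquisition. */
    }
    smart_lock(&a);
    smart_lock(&b);
    /* ... work ... */
    smart_unlock(&b);
    smart_unlock(&a);
}
```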

Thus, in list form:

Goals

1. Have a way to detect when we have entered deadlock due to dependencies on several kinds of mutex built-ins.
2. Be able to back out of deadlock in such a way that progress becomes possible.
3. Increase the run speed of cycle detection.

Parallelism Competition

We plan to show a demo of our mutexes at the parallelism competition by running code that intentionally tries to deadlock and demonstrating that, with our modifications, it is able to terminate, even if it runs slowly. However slowly it runs, it is still faster than a deadlocked program, so it is an improvement over the original code.
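The demo could be as simple as the classic two-lock ordering deadlock below (a sketch; the actual demo program may differ). Under stock pthreads it hangs forever; linked against our library, the cycle should be detected and the program should terminate.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

static void *worker1(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&a);
    sleep(1);                    /* widen the race window */
    pthread_mutex_lock(&b);      /* blocks: worker2 holds b */
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
    return NULL;
}

static void *worker2(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&b);
    sleep(1);
    pthread_mutex_lock(&a);      /* blocks: worker1 holds a -> cycle */
    pthread_mutex_unlock(&a);
    pthread_mutex_unlock(&b);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker1, NULL);
    pthread_create(&t2, NULL, worker2, NULL);
    pthread_join(t1, NULL);      /* never returns with plain pthreads */
    pthread_join(t2, NULL);
    puts("terminated");
    return 0;
}
```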

Revised Schedule

Week 3

Figure out how to implement the algorithm in a concurrent context. We would like to minimize the critical regions to achieve good performance. Start coding by the middle of the week.

Week 4

Finish coding in the first half of the week. The rest of the week is either for debugging, if our code is problematic, or for improving performance, if it works.

Week 5

The first half of the week is for writing bigger and more complicated test programs and using them to test and benchmark our API. The second half of the week is for preparing our presentation.