
    Concurrency

    Flashcards for topic Concurrency

    Intermediate · 48 cards · General

    Preview Cards

    Card 1

    Front

    What solution pattern should be used to prevent ConcurrentModificationException and deadlocks when notifying observers, and how is it implemented?

    Back

    Solution: The "snapshot" pattern

    Implementation:

    private void notifyElementAdded(E element) {
        List<SetObserver<E>> snapshot;
        // Take a defensive snapshot of the observers while holding the lock
        synchronized (observers) {
            snapshot = new ArrayList<>(observers);
        }
        // Iterate over the snapshot without holding the lock
        for (SetObserver<E> observer : snapshot)
            observer.added(this, element);
    }

    This pattern:

    1. Creates a defensive copy of the collection while holding the lock
    2. Releases the lock before performing potentially long-running operations
    3. Allows concurrent modifications to the original collection during notification
    4. Prevents both ConcurrentModificationException and deadlocks
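
    The snapshot code above assumes an observer callback interface roughly like the following (a minimal sketch; the SetObserver and ObservableSet names follow the card's code, but this exact declaration is an assumption):

    // Hypothetical callback interface used by notifyElementAdded
    public interface SetObserver<E> {
        // Invoked after an element has been added to the observable set
        void added(ObservableSet<E> set, E element);
    }
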
    Card 2

    Front

    What is an "open call" in concurrent programming, and why should you prefer it?

    Back

    An open call is a method invocation made outside of a synchronized region.

    Benefits:

    • Prevents deadlocks by avoiding nested lock acquisition
    • Increases concurrency by minimizing the duration locks are held
    • Allows other threads to access shared resources during potentially long operations
    • Prevents reentrant locking issues with callbacks

    Implementation pattern:

    1. Acquire lock
    2. Examine/copy shared data as quickly as possible
    3. Release lock
    4. Perform time-consuming operations or calls to untrusted code without holding the lock

    This is a fundamental principle for achieving high-performance thread-safe code.
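    A minimal sketch of those four steps (cache, lookup, and slowCompute are hypothetical names, purely to show where the lock is released; Map and HashMap are from java.util):

    private final Map<String, String> cache = new HashMap<>();

    public String lookup(String key) {
        String cached;
        synchronized (cache) {                    // 1. acquire lock
            cached = cache.get(key);              // 2. examine shared data quickly
        }                                         // 3. release lock
        if (cached != null)
            return cached;

        String computed = slowCompute(key);       // 4. open call: slow or alien work, no lock held
        synchronized (cache) {
            cache.putIfAbsent(key, computed);     // re-acquire briefly to publish the result
        }
        return computed;
    }

    private String slowCompute(String key) {
        return key.toUpperCase();                 // stand-in for an expensive or untrusted operation
    }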

    Card 3

    Front

    What does "safe publication" mean in concurrent programming and what are the ways to achieve it?

    Back

    Safe publication means making an object available to other threads in a way that guarantees they see its fully initialized state.

    Objects shared without safe publication might appear partially constructed to other threads.

    Ways to safely publish an object:

    1. Store in a static field during class initialization
    2. Store in a volatile field
    3. Store in a final field
    4. Store in a field accessed with normal locking (synchronized methods/blocks)
    5. Put into a concurrent collection (ConcurrentHashMap, CopyOnWriteArrayList, etc.)
    6. Transfer via a BlockingQueue or Exchanger

    Safe publication is especially important for objects that will be effectively immutable (constructed, then never modified again) but aren't technically immutable classes.

    Example:

    // Safe publication using a volatile field (double-check idiom)
    private static volatile SafeObject instance;

    public static SafeObject getInstance() {
        if (instance == null) {
            synchronized (SafeObject.class) {
                if (instance == null) {
                    instance = new SafeObject(); // Safely published via volatile
                }
            }
        }
        return instance;
    }
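
    For contrast, way 6 in the list above can look like the sketch below (QueuePublication and Task are hypothetical names; the point is that the BlockingQueue hand-off, not a final or volatile field, is what lets the consumer see a fully constructed object):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class QueuePublication {
        static final class Task {
            String name;                            // deliberately not final or volatile
            Task(String name) { this.name = name; }
        }

        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<Task> queue = new LinkedBlockingQueue<>();

            Thread consumer = new Thread(() -> {
                try {
                    Task task = queue.take();       // sees a fully initialized Task
                    System.out.println("Received: " + task.name);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            consumer.start();

            queue.put(new Task("report"));          // safe publication happens at the hand-off
            consumer.join();
        }
    }
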
    Card 4

    Front

    Describe the Lazy Initialization Holder Class idiom. When is it appropriate to use, and why is it considered efficient?

    Back

    The Lazy Initialization Holder Class idiom is a thread-safe pattern for initializing static fields:

    // Lazy initialization holder class idiom for static fields
    private static class FieldHolder {
        static final FieldType field = computeFieldValue();
    }

    private static FieldType getField() { return FieldHolder.field; }

    When to use:

    • Only for static field lazy initialization
    • When performance optimization is needed
    • To break initialization circularities

    Why it's efficient:

    • Exploits JVM's class initialization guarantees (JLS 12.4.1)
    • No explicit synchronization in the access method
    • FieldHolder class is only initialized when first accessed
    • After initialization, field access has no synchronization overhead
    • JVM handles synchronization during class initialization internally
    • The VM typically patches code after initialization to avoid synchronization on subsequent accesses

    This idiom is considered the most elegant approach for lazy initialization of static fields as it combines thread safety with minimal performance impact.
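
    A concrete instantiation might look like this (ConfigService, ConfigHolder, and loadConfig are illustrative names, not from the card; Properties is from java.util):

    import java.util.Properties;

    public class ConfigService {
        private static class ConfigHolder {
            static final Properties CONFIG = loadConfig();   // runs once, on first access
        }

        public static Properties getConfig() {
            return ConfigHolder.CONFIG;                      // no locking on the fast path
        }

        private static Properties loadConfig() {
            Properties p = new Properties();
            p.setProperty("mode", "production");             // stand-in for real configuration work
            return p;
        }
    }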

    Card 5

    Front

    What are the five levels of thread safety for classes? For each level, explain its meaning and provide an example.

    Back

    The five levels of thread safety:

    1. Immutable

      • Meaning: Instances appear constant; no external synchronization needed
      • Objects cannot change state after construction
      • Examples: String, Long, BigInteger
    2. Unconditionally thread-safe

      • Meaning: Instances are mutable but have sufficient internal synchronization
      • Can be used concurrently without external synchronization
      • Examples: AtomicLong, ConcurrentHashMap
    3. Conditionally thread-safe

      • Meaning: Some methods require external synchronization for safe concurrent use
      • Most methods are thread-safe, but certain sequences require external synchronization
      • Examples: Collections returned by the Collections.synchronizedXXX wrappers (iteration requires the client to synchronize on the returned collection)
    4. Not thread-safe

      • Meaning: Instances are mutable and require external synchronization for concurrent use
      • Clients must synchronize all method access
      • Examples: ArrayList, HashMap, most collection implementations
    5. Thread-hostile

      • Meaning: Unsafe for concurrent use even with external synchronization
      • Usually caused by modifying static data without synchronization
      • Examples: Classes with static fields modified without proper synchronization
      • These are typically fixed or deprecated once identified

    Understanding these levels is crucial for properly documenting a class's thread safety properties and for clients to use the class correctly.
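
    As a concrete illustration of level 3, individual calls on a Collections.synchronizedList wrapper are safe, but iteration is a compound sequence that requires external synchronization on the returned list (the list contents here are just an example; List, ArrayList, and Collections are from java.util):

    List<String> list = Collections.synchronizedList(new ArrayList<>());
    list.add("a");                        // individual calls are internally synchronized
    list.add("b");

    // Iteration is NOT atomic: the client must hold the list's lock for the
    // whole traversal, or risk non-deterministic behavior
    synchronized (list) {
        for (String s : list) {
            System.out.println(s);
        }
    }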

    Card 6

    Front

    What is the standard idiom for using the wait method, and why must wait always be used inside a loop? Explain the consequences of not following this pattern.

    Back

    Standard wait method idiom:

    // The standard idiom for using the wait method
    synchronized (obj) {
        while (<condition does not hold>)
            obj.wait(); // Releases lock, and reacquires on wakeup

        // Perform action appropriate to condition
    }

    Why wait must be used in a loop:

    1. Testing before waiting (protects liveness):

      • If condition already holds, skips unnecessary waiting
      • Prevents deadlock if notify was already called before waiting
      • No guarantee the thread will ever wake if notify happened earlier
    2. Testing after waiting (protects safety):

      • Thread might wake up when condition still doesn't hold
      • Acting on the expectation that condition is true could corrupt invariants

    Consequences of not using a loop:

    • Potential deadlocks (thread might never wake)
    • Race conditions (condition might change between notify and wakeup)
    • Corruption of program state by proceeding when condition is false
    • Vulnerability to spurious wakeups (threads can wake without notification)

    Reasons a thread might wake when condition doesn't hold:

    • Another thread changed state between notification and wakeup
    • Accidental/malicious notifications on publicly accessible objects
    • Overly "generous" notifications (notifyAll waking all threads)
    • Spurious wakeups (rare but possible JVM behavior)
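
    The idiom in context, as a minimal sketch (a hypothetical single-slot buffer; put blocks while the slot is full, take blocks while it is empty):

    public class OneSlotBuffer<T> {
        private T item;                              // null means the slot is empty

        public synchronized void put(T newItem) throws InterruptedException {
            while (item != null)                     // wait in a loop, not an if
                wait();
            item = newItem;
            notifyAll();                             // wake any waiting takers
        }

        public synchronized T take() throws InterruptedException {
            while (item == null)
                wait();
            T result = item;
            item = null;
            notifyAll();                             // wake any waiting putters
            return result;
        }
    }
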
    Card 7

    Front

    When implementing concurrent timing with CountDownLatch, what potential thread starvation issue must be considered, and what causes it?

    Back

    When implementing concurrent timing with CountDownLatch, you must be aware of thread starvation deadlock:

    The issue:

    • The executor passed to the timing method must provide enough threads to meet the concurrency level
    • If insufficient threads are available, the test will never complete (deadlock)

    Specific deadlock scenario:

    • If concurrency parameter is set to N
    • But executor can only provide fewer than N threads
    • Some worker tasks never get a thread, so their countDown() calls on the ready latch are never made
    • The timer thread waits indefinitely on a latch count that can never reach zero

    Example:

    // This will deadlock if concurrency > executor's max threads
    public static long time(Executor executor, int concurrency, Runnable action)
            throws InterruptedException {
        CountDownLatch ready = new CountDownLatch(concurrency);
        // Additional code...
        ready.await(); // Deadlocks if not enough threads to count down
        // ...
    }

    Prevention:

    • Ensure executor can support at least as many threads as the concurrency level
    • Consider using bounded thread pools with queue sizes that match expected workload
    • Implement timeouts on latch waits for production code
    • Document this requirement for users of the timing framework
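
    A sketch of the first prevention point, sizing the executor to the concurrency level (ExecutorService and Executors are from java.util.concurrent; the time call refers to the fragment above, and the action is a placeholder):

    int concurrency = 10;
    // A fixed pool with exactly `concurrency` threads guarantees every worker
    // can start, so every ready.countDown() call is eventually reached
    ExecutorService executor = Executors.newFixedThreadPool(concurrency);
    try {
        long nanos = time(executor, concurrency, () -> { /* action under test */ });
        System.out.println("Elapsed: " + nanos + " ns");
    } finally {
        executor.shutdown();
    }
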
    Card 8

    Front

    Compare and contrast the use of notify vs. notifyAll. When is it safe to use notify instead of notifyAll, and what risks does this optimization introduce?

    Back

    notify vs. notifyAll:

    | notify | notifyAll |
    |--------|-----------|
    | Wakes a single waiting thread | Wakes all waiting threads |
    | More efficient (fewer thread wakeups) | Less efficient (unnecessary wakeups) |
    | Risk of leaving intended recipients waiting | Guaranteed to wake all necessary threads |
    | Requires careful reasoning about wait-sets | Conservative and always correct |

    When it's safe to use notify (optimization conditions):

    • All threads in the wait-set are waiting for the exact same condition
    • Only one thread at a time can benefit from the condition becoming true
    • You have complete control over all code that could wait on the object

    Risks introduced by using notify:

    • "Notification stealing" - an unintended thread receives the notification
    • Critical notifications might be "swallowed" by unrelated threads
    • If conditions change, notify might wake the wrong thread
    • In public/accessible objects, malicious code could add waits that intercept notifications

    Best Practice: Use notifyAll by default as a safer approach. Only use notify as a deliberate optimization after careful analysis of all possible waiter threads and their conditions.

    Example of notify risk:

    // Thread A and B wait on condition X
    // Thread C waits on condition Y
    // All wait on the same object
    synchronized (obj) {
        // If we notify() when X becomes true,
        // Thread C might wake instead of A or B,
        // and A and B might never wake
    }
    Card 9

    Front

    How does the primitive field handling differ in lazy initialization patterns compared to reference fields?

    Back

    For primitive fields in lazy initialization:

    1. The null check becomes a comparison against 0 (default value for numerical primitives)

      // Instead of checking against null:
      if (numericField == 0) // For primitives, 0 is the default (uninitialized) value
    2. For primitives other than long/double that can tolerate repeated initialization, you can use the racy single-check idiom (removing volatile)

    3. Primitive initialization must handle the ambiguity that the default value (0) might be a valid computed value, unlike reference fields where null clearly indicates uninitialized state

    4. Reads and writes of long and double fields are not guaranteed to be atomic unless the field is volatile, which is why the racy (non-volatile) variant is restricted to the other primitive types

    This creates additional complexity when adapting reference-based idioms to primitive fields.
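
    A sketch of point 2, the racy single-check idiom applied to an int field (hashValue and computeHash are hypothetical names; this assumes recomputation by racing threads is harmless):

    // Racy single-check idiom: no volatile, no locking.
    // Valid only for primitives other than long/double, and only when
    // repeated initialization is acceptable.
    private int hashValue;                        // 0 doubles as "not yet computed"

    public int hash() {
        int result = hashValue;
        if (result == 0) {                        // point 3: 0 could also be a legitimate value,
            result = computeHash();               // in which case we simply recompute
            hashValue = result;
        }
        return result;
    }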

    Card 10

    Front

    What makes thread priorities problematic for solving concurrency issues, and what is their appropriate use case?

    Back

    Problems with thread priorities:

    1. Least portable feature in Java's concurrency toolkit
    2. Different OS/JVM implementations handle priorities inconsistently
    3. Priority inversion problems can occur
    4. Underlying concurrency issues remain unresolved
    5. Timing-dependent bugs may temporarily disappear but later resurface

    Incorrect use:

    • Attempting to fix liveness problems
    • Making non-working code "barely work"
    • Using as a primary synchronization mechanism
    • Relying on for program correctness

    Appropriate use (limited):

    • Fine-tuning the responsiveness of an already working application
    • Used sparingly to improve quality of service
    • Applied after proper synchronization mechanisms are in place
    • Only as a performance optimization, never for correctness

    Thread priorities should be considered hints to the scheduler, not guarantees of execution order or timing.

    Showing 10 of 48 cards.