Flashcards for topic Lambdas and Streams
What are the key differences between method references and lambdas, and when should you prefer one over the other?
Method references provide a more concise alternative to lambdas when referencing existing methods:
Differences: a method reference simply names an existing method, while a lambda spells out its parameters and body; in most cases anything a method reference can do, a lambda can do as well, so the choice is primarily about brevity and readability.
When to prefer method references:
// Method reference: clean and concise
words.sort(comparingInt(String::length));

// Lambda: more verbose for this case
words.sort((s1, s2) -> Integer.compare(s1.length(), s2.length()));
When to prefer lambdas:
// Lambda preferred here (method name is excessive)
service.execute(() -> action());

// vs. the less readable method reference
service.execute(GoshThisClassNameIsHumongous::action);
Rule of thumb: Where method references are shorter and clearer, use them; where they aren't, stick with lambdas.
What is the key distinction between bound and unbound instance method references in Java 8, and when would you use each?
Distinction between bound and unbound instance method references:
Bound Method References:
Syntax: objectInstance::instanceMethod
Example: Instant.now()::isAfter
Equivalent lambda: t -> Instant.now().isAfter(t)

Unbound Method References:
Syntax: ClassName::instanceMethod
Example: String::toLowerCase
Equivalent lambda: str -> str.toLowerCase()
When to use each:
Use bound references when: the receiver is a specific object known at the point where the reference is created; it is captured once and becomes part of the reference (e.g., Instant.now()::isAfter).
Use unbound references when: the receiver is supplied later as the first argument of the function, which is common in stream mapping and filtering (e.g., map(String::toLowerCase)). See the sketch below.
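A minimal sketch contrasting the two forms; the class name, variable names, and sample strings are illustrative, not taken from the card:

import java.time.Instant;
import java.util.Arrays;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;
import java.util.stream.Collectors;

public class MethodRefKinds {
    public static void main(String[] args) {
        // Bound: the receiving Instant is captured once, when the reference is created;
        // the function's argument becomes isAfter's argument.
        Predicate<Instant> isPast = Instant.now()::isAfter;
        System.out.println(isPast.test(Instant.EPOCH));        // true

        // Unbound: the receiver is supplied later, as the function's first argument.
        UnaryOperator<String> toLower = String::toLowerCase;
        System.out.println(Arrays.asList("Foo", "BAR").stream()
                .map(toLower)
                .collect(Collectors.toList()));                 // [foo, bar]
    }
}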
What are the three primary forms of the toMap collector and when would you use each one?
The three forms of toMap collector:
Basic form: toMap(keyMapper, valueMapper)
Map<String, Operation> stringToEnum = Stream.of(values())
    .collect(toMap(Object::toString, e -> e));
Merge function form: toMap(keyMapper, valueMapper, mergeFunction)
// Last-write-wins policy
toMap(keyMapper, valueMapper, (v1, v2) -> v2)

// Find maximum value for each key
Map<Artist, Album> topHits = albums.collect(
    toMap(Album::artist, a -> a, maxBy(comparing(Album::sales))));
Map factory form: toMap(keyMapper, valueMapper, mergeFunction, mapFactory)
toMap(keyMapper, valueMapper, mergeFunction, TreeMap::new)
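A small self-contained sketch of the four-argument form; the class name and the word-length data are illustrative assumptions, not part of the card:

import java.util.Arrays;
import java.util.TreeMap;
import static java.util.stream.Collectors.toMap;

public class ToMapForms {
    public static void main(String[] args) {
        // Map each word to its length, resolve key collisions with a
        // last-write-wins merge function, and collect into a sorted TreeMap
        // supplied by the map factory.
        TreeMap<String, Integer> lengths = Arrays.stream("pear plum pear fig".split(" "))
            .collect(toMap(
                w -> w,                 // keyMapper
                String::length,         // valueMapper
                (v1, v2) -> v2,         // mergeFunction: last write wins
                TreeMap::new));         // mapFactory
        System.out.println(lengths);    // {fig=3, pear=4, plum=4}
    }
}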
How do you choose the appropriate return type for methods that return sequences of elements in Java, and what are the trade-offs?
Guidelines for choosing sequence return types in Java:
Collection interface (preferred when applicable)
Iterable interface
Array
Stream
Collection AND Stream
// Clients can either iterate or stream as needed
public List<Element> getElements() { ... }

// Usage:
for (Element e : obj.getElements()) { ... }   // Iteration
obj.getElements().stream()...                 // Streaming
Recommendation: Prefer Collection over Stream as a return type when possible, as it allows clients to choose their processing model.
Trade-off: If providing both would be expensive, choose based on the most likely use case.
What are the most significant limitations of using Collection as a return type compared to Stream or Iterable?
Limitations of Collection as return type:
Size constraints: Collection has an int-returning size() method, so a returned collection cannot represent more than about 2^31 elements, and infinite sequences are out of the question.
Memory requirements: the whole sequence normally has to be materialized in memory before it is returned, whereas a Stream or Iterable can produce elements lazily.
Implementation complexity: avoiding full materialization means writing a custom Collection implementation, which requires sensible contains() and size() methods that may be awkward or impossible to provide.
Practical limitations: for very large sequences the up-front cost of building the collection can dwarf the cost of the processing the client actually performs.
While Collection provides both iteration and streaming capabilities, these limitations make Stream or Iterable preferable for very large or infinite sequences.
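As an illustration of that last point, here is a sketch of a sequence that cannot reasonably be returned as a Collection, an endless stream of primes; the method and class names are just for this example:

import java.math.BigInteger;
import java.util.stream.Stream;

public class InfiniteSequences {
    // An endless sequence: only Stream (or Iterable) can represent it; a
    // Collection would have to materialize it and could never report size().
    static Stream<BigInteger> primes() {
        return Stream.iterate(BigInteger.valueOf(2), BigInteger::nextProbablePrime);
    }

    public static void main(String[] args) {
        primes().limit(5).forEach(System.out::println);   // 2 3 5 7 11
    }
}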
What is the "locality of reference" concept and why is it critical for effective parallelization of Java streams?
Locality of reference:
Definition: The property where data elements that are accessed together are also stored physically close together in memory.
Why it's critical for parallel streams:
Memory access efficiency: when locality is poor, threads spend much of their time idle, waiting for data to be transferred from main memory into the processor's cache instead of doing useful work.
Impact on parallelization: that waiting limits how well the work scales across cores, so a pipeline over a poorly laid-out source may see little or no speedup from going parallel.
Data structures with best locality: arrays, ArrayList, HashMap, HashSet, ConcurrentHashMap, and int/long ranges; primitive arrays are best of all because the data itself, not just references to it, is stored contiguously.
Practical implications: parallel streams pay off most when the source is one of these splittable, cache-friendly structures.
Optimizing for locality: prefer primitive streams and array- or range-backed sources over pointer-heavy structures such as linked lists.
Good locality of reference can often be the difference between significant speedups and disappointing slowdowns when parallelizing streams.
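A minimal sketch of a parallel-friendly source, a primitive long range that splits cheaply and evenly; the class name and the billion-element bound are just an illustrative workload:

import java.util.stream.LongStream;

public class LocalityFriendly {
    public static void main(String[] args) {
        // A LongStream range splits evenly and its values are generated
        // arithmetically rather than chased through pointers, so the
        // parallel version typically scales well with available cores.
        long sum = LongStream.rangeClosed(1, 1_000_000_000L)
            .parallel()
            .sum();
        System.out.println(sum);   // 500000000500000000
    }
}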
How does the BigInteger.isProbablePrime(int certainty) method work, and why is it particularly suitable for parallelization?
BigInteger.isProbablePrime(int certainty) works as follows:
The certainty parameter (50 in the example) determines how many rounds of probabilistic primality testing to perform; if the method returns true, the probability that the number is actually prime exceeds 1 - 1/2^certainty, while a false return means the number is definitely composite.
Suitable for parallelization because:
Each candidate is tested independently: the call touches no shared mutable state, its result does not depend on the results for other candidates, and the per-element work is CPU-bound and substantial enough to outweigh the cost of splitting.
These properties create an "embarrassingly parallel" problem where performance scales almost linearly with available cores.
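A sketch of the kind of pipeline this describes, a parallel prime-counting function in the style of the classic pi(n) example; the class name and the bound used in main are just demo values:

import java.math.BigInteger;
import java.util.stream.LongStream;

public class ParallelPrimeCount {
    // Counts the primes up to and including n. Each isProbablePrime(50) call is
    // independent and CPU-bound, and rangeClosed splits cheaply, so parallel()
    // tends to scale almost linearly with cores.
    static long pi(long n) {
        return LongStream.rangeClosed(2, n)
            .parallel()
            .mapToObj(BigInteger::valueOf)
            .filter(i -> i.isProbablePrime(50))
            .count();
    }

    public static void main(String[] args) {
        System.out.println(pi(100_000));   // 9592
    }
}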
How do you implement bidirectional adapter methods between Stream and Iterable in Java, and what are the key considerations for using each adapter?
public static <E> Iterable<E> iterableOf(Stream<E> stream) {
    return stream::iterator;
}

public static <E> Stream<E> streamOf(Iterable<E> iterable) {
    return StreamSupport.stream(iterable.spliterator(), false);
}
Key considerations:
iterableOf uses a method reference (stream::iterator), so the returned Iterable's iterator() method simply delegates to the stream's iterator.
streamOf calls StreamSupport.stream() with the iterable's spliterator.
The boolean argument (false) in StreamSupport.stream() indicates non-parallel processing.

// Stream to Iterable
for (ProcessHandle p : iterableOf(ProcessHandle.allProcesses())) {
    // Process each handle with for-each syntax
}

// Iterable to Stream
List<String> names = new ArrayList<>();
streamOf(names)
    .filter(name -> name.startsWith("A"))
    .map(String::toUpperCase)
    .forEach(System.out::println);
What principles should guide your selection between standard and custom functional interfaces when designing Java 8+ APIs that accept functions?
Default approach: Strongly prefer standard interfaces (java.util.function) unless you have compelling reasons not to.
Selection process for standard interfaces:
If the function takes and returns the same type: UnaryOperator<T> or BinaryOperator<T>
If the function returns boolean: Predicate<T> or BiPredicate<T,U>
If the function transforms one type into another: Function<T,R> or BiFunction<T,U,R>
If the function consumes a value and returns nothing: Consumer<T> or BiConsumer<T,U>
If the function takes no arguments and supplies a value: Supplier<T>
Standard interfaces also come with useful default methods for composition (e.g., Predicate.and(), Function.compose()).

Only create custom functional interfaces when:
The interface will be commonly used and would benefit from a descriptive name
It has a strong contract associated with it
It would benefit from custom default methods
Always annotate custom functional interfaces with @FunctionalInterface so the compiler verifies they have exactly one abstract method.
// BAD: Unnecessary custom interface
@FunctionalInterface
interface EldestEntryRemovalFunction<K,V> {
    boolean remove(Map<K,V> map, Map.Entry<K,V> eldest);
}

// GOOD: Standard interface for the same purpose
BiPredicate<Map<K,V>, Map.Entry<K,V>>

// Usage remains intuitive
cacheWithEviction((map, entry) -> map.size() > 100);
What are the comprehensive risks and failure modes of parallelizing Java stream pipelines, and what specific conditions lead to these issues?
Performance Degradation
Pipelines whose source is a Stream.iterate() operation, or that use limit() as an intermediate operation, split poorly; parallelizing them can make performance dramatically worse rather than better.
Correctness Failures
Function objects that violate their specifications (e.g., accumulators or combiners that are not associative, or mappers and filters that are stateful or have side effects) can silently produce wrong results when run in parallel.
Liveness Failures
A misbehaving parallel pipeline can fail to terminate or make progress far more slowly than expected.
System-Wide Impact
All parallel stream pipelines in a JVM share the common fork-join pool, so one misbehaving pipeline can degrade the performance of unrelated parts of the system.
Best Practice: Always benchmark performance before/after parallelization under realistic conditions, and verify correctness with thorough testing across multiple runs.
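For illustration, a sketch of a pipeline that combines both performance warning signs, a Stream.iterate() source and a limit() operation; the Mersenne-prime computation and class name are just a convenient CPU-heavy example:

import java.math.BigInteger;
import java.util.stream.Stream;

public class ParallelPitfall {
    private static final BigInteger TWO = BigInteger.valueOf(2);

    public static void main(String[] args) {
        // Prints the first 20 Mersenne primes. Do NOT add .parallel() here:
        // an iterate() source plus limit() defeats the splitting strategy,
        // and the parallel run can take far longer than the sequential one.
        Stream.iterate(TWO, BigInteger::nextProbablePrime)
            .map(p -> TWO.pow(p.intValueExact()).subtract(BigInteger.ONE))
            .filter(mersenne -> mersenne.isProbablePrime(50))
            .limit(20)
            .forEach(System.out::println);
    }
}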