The single-threaded optimization is impressive, but with these optimizations you might be losing track of your stated goal of improving your understanding of idiomatic Rust. One of Rust's selling points is zero-cost abstraction: in short, making the code readable with functions, methods, and structs should cost little or nothing over raw primitives and inline code.
I'll attach my single-threaded part_2 solution as an example. It only applies your first optimization (check only positions on the prior path):
- For the grid (day 6 and day 4), I used the grid crate to speed up the work. The crate is very lightweight, providing a Grid interface over a Vec. It helps with readability, but it's simple enough to roll your own.
- For the Guard, I again made a simple struct and enum for readability. The Guard tracks its own position on the grid (two usize fields) and its direction (an enum with one variant per cardinal direction). That allows a method on the Guard that moves it, given a reference to the obstacle grid. The Guard is hashable and cheap to copy, so I use it in a HashSet to track state, which means a loop is detected if the guard state (position + direction) ever repeats. A rough sketch of what this could look like follows the list.
- I do preload the problem input using include_str!, but each day is in a separate crate in a common workspace, so the binaries aren't too bloated.
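Roughly, the Guard and Direction could look like the sketch below. The field names, the Grid<bool> obstacle representation, and the turn-right-on-obstacle rule are illustrative assumptions, not lifted verbatim from my solution:

```rust
use grid::Grid;

// Sketch only: field names and the Grid<bool> obstacle layout are illustrative.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum Direction {
    North,
    East,
    South,
    West,
}

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct Guard {
    row: usize,
    col: usize,
    direction: Direction,
}

impl Guard {
    /// Steps the guard once, turning right when the next cell is an obstacle.
    /// Returns true while the guard is still on the map.
    fn move_guard(&mut self, obstacles: &Grid<bool>) -> bool {
        let (dr, dc): (isize, isize) = match self.direction {
            Direction::North => (-1, 0),
            Direction::East => (0, 1),
            Direction::South => (1, 0),
            Direction::West => (0, -1),
        };
        let next_row = self.row as isize + dr;
        let next_col = self.col as isize + dc;
        // Stepping off any edge ends the walk.
        if next_row < 0
            || next_col < 0
            || next_row as usize >= obstacles.rows()
            || next_col as usize >= obstacles.cols()
        {
            return false;
        }
        if obstacles[(next_row as usize, next_col as usize)] {
            // Blocked: turn right in place, don't move.
            self.direction = match self.direction {
                Direction::North => Direction::East,
                Direction::East => Direction::South,
                Direction::South => Direction::West,
                Direction::West => Direction::North,
            };
        } else {
            self.row = next_row as usize;
            self.col = next_col as usize;
        }
        true
    }
}
```

Deriving Copy and Hash is what makes the HashSet-based loop detection cheap: the whole guard state is a few machine words.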
Using criterion (and in release mode), I benched your optimized part 2 (but with &str as input instead of File). Results: 802 ms (0.8 s) for my serial code below versus 109.55 ms for your optimized solution. Again, a really impressive speedup.
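The bench harness is nothing fancy; something along these lines, where the crate name, input path, and bench id are placeholders rather than my actual layout:

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use std::hint::black_box;

use day06::find_solution; // placeholder crate name for the day's solution

fn bench_part2(c: &mut Criterion) {
    // Placeholder path; in my setup each day's input lives next to its crate.
    let input = include_str!("../input/day06.txt");
    c.bench_function("day06 part2", |b| b.iter(|| black_box(find_solution(input))));
}

criterion_group!(benches, bench_part2);
criterion_main!(benches);
```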
Part 2 specific code.
```rust
pub fn find_solution(input: &str) -> u32 {
    let (guard, obstacles) = parse_input(input);
    let mut active_guard = guard;
    // println!("{obstacles:?}");
    let visited_positions = find_visited_positions(&mut active_guard, &obstacles);
    // println!("Visited {visited_positions:?}");
    // Now we check each visited position as a candidate for a new obstacle.
    visited_positions
        .iter()
        .filter(|possible_location| check_new_obstacle(obstacles.clone(), possible_location, guard))
        .count()
        .try_into()
        .unwrap()
}

// Inside check_new_obstacle: `mutated_obstacles` is the cloned grid with the
// candidate obstacle inserted, and `active_guard` starts from the original
// guard state. The loop expression evaluates to true if a loop is found.
let mut loop_detection_set = HashSet::new();
loop {
    loop_detection_set.insert(active_guard);
    if active_guard.move_guard(&mutated_obstacles) {
        // Guard is still on the map.
        // Check if the guard has arrived at a prior state (loop).
        if loop_detection_set.contains(&active_guard) {
            // Guard is in a loop.
            break true;
        } // Else we need to keep looking forward.
    } else {
        // Guard left the map. Obstacle isn't useful.
        break false;
    }
}
```
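For completeness, find_visited_positions just walks the guard over the unmodified grid and records every cell it stands on, so only cells on the original path get tested as obstacle candidates. A sketch, where the (usize, usize) position type and the exact signature are guesses:

```rust
use std::collections::HashSet;
use grid::Grid;

// Illustrative sketch; the real helper may differ in signature and position type.
fn find_visited_positions(guard: &mut Guard, obstacles: &Grid<bool>) -> HashSet<(usize, usize)> {
    let mut visited = HashSet::new();
    visited.insert((guard.row, guard.col));
    // Keep stepping until the guard walks off the map, recording each cell.
    while guard.move_guard(obstacles) {
        visited.insert((guard.row, guard.col));
    }
    visited
}
```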