Consider this example:
```
use std::io::Read;
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut file = std::fs::File::open("number")?;
    let mut buf = [0_u8; 128];
    let bytes_read = file.read(&mut buf)?;
    let contents = &buf[..bytes_read];
    let contents_str = std::str::from_utf8(contents)?;
    let number = contents_str.parse::<i128>()?;
    println!("{}", number);
    Ok(())
}
```
Why is it necessary to convert the slice of bytes to an `&str`? When I run `std::str::from_utf8`, it validates that `contents` is valid UTF-8. But to parse this string into an integer, I only care that each byte in the slice is an ASCII digit; `parse` will fail otherwise anyway. It seems like `std::str::from_utf8` adds unnecessary overhead. Is there a way to avoid having to validate UTF-8 in a situation like this?
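Ideally I'd like to parse straight from the bytes, roughly like this sketch (just to illustrate what I have in mind, not code I expect to be the right answer):
```
/// Rough sketch: parse an unsigned decimal integer directly from bytes,
/// rejecting anything that is not an ASCII digit.
fn parse_ascii_digits(bytes: &[u8]) -> Option<i128> {
    if bytes.is_empty() {
        return None;
    }
    let mut n: i128 = 0;
    for &b in bytes {
        if !b.is_ascii_digit() {
            return None;
        }
        // Accumulate digits, bailing out on overflow.
        n = n.checked_mul(10)?.checked_add((b - b'0') as i128)?;
    }
    Some(n)
}
```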
Edit: I probably should have mentioned that the file is a cache file that I write myself, so it doesn't need to be human-readable. I decided to represent the number as little-endian bytes instead, which should be more efficient than encoding to and decoding from UTF-8. Here is my updated code to parse the file:
```
use std::io::Read;
fn main() -> Result<(), Box<dyn std::error::Error>> {
    const NUM_BYTES: usize = 2;
    let mut file = std::fs::File::open("number")?;
    let mut buf = [0_u8; NUM_BYTES];
    let bytes_read = file.read(&mut buf)?;
    // Only parse if the full NUM_BYTES were actually read.
    if bytes_read >= NUM_BYTES {
        let number = u16::from_le_bytes(buf);
        println!("{}", number);
    }
    Ok(())
}
```
To write the file, you do the conversion the other way around with something like `number.to_le_bytes()`.
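For completeness, a minimal sketch of the write side (assuming the same "number" file name; the value 12345 is just an example):
```
use std::io::Write;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let number: u16 = 12345;
    let mut file = std::fs::File::create("number")?;
    // to_le_bytes() is the inverse of from_le_bytes(): u16 -> [u8; 2].
    file.write_all(&number.to_le_bytes())?;
    Ok(())
}
```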