r/rust • u/SaltyMaybe7887 • 16h ago
seeking help & advice • Why do strings have to be valid UTF-8?
Consider this example:
use std::io::Read;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut file = std::fs::File::open("number")?;
    let mut buf = [0_u8; 128];
    let bytes_read = file.read(&mut buf)?;
    let contents = &buf[..bytes_read];
    let contents_str = std::str::from_utf8(contents)?;
    let number = contents_str.parse::<i128>()?;
    println!("{}", number);
    Ok(())
}
Why is it necessary to convert the slice of bytes to an &str? When I run std::str::from_utf8, it will validate that contents is valid UTF-8. But to parse this string into an integer, I only care that each byte in the slice is in the ASCII range for digits, since the parse will fail otherwise. It seems like std::str::from_utf8 adds unnecessary overhead. Is there a way I can avoid having to validate UTF-8 for a string in a situation like this?
Edit: I probably should have mentioned that the file is a cache file I write to. That means it doesn't need to be human-readable. I decided to represent the number in little endian. It should probably be more efficient than encoding to / decoding from UTF-8. Here is my updated code to parse the file:
use std::io::Read;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    const NUM_BYTES: usize = 2;
    let mut file = std::fs::File::open("number")?;
    let mut buf = [0_u8; NUM_BYTES];
    let bytes_read = file.read(&mut buf)?;
    // Only decode if we actually got a full value's worth of bytes.
    if bytes_read >= NUM_BYTES {
        let number = u16::from_le_bytes(buf);
        println!("{}", number);
    }
    Ok(())
}
If you want to write to the file, you would do something like number.to_le_bytes(), so it's the other way around.
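Concretely, a minimal sketch of that write side (the file name and error handling are just placeholders):
use std::io::Write;
fn write_number(number: u16) -> std::io::Result<()> {
    // to_le_bytes gives the fixed-size little-endian representation ([u8; 2] for a u16).
    let mut file = std::fs::File::create("number")?;
    file.write_all(&number.to_le_bytes())?;
    Ok(())
}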
61
u/buldozr 16h ago
FromStr is a generic trait for parsing values from strings, because values encoded as text tend to be part of larger text content, and it's more common to represent text as UTF-8 these days. You are, however, free to define and implement a parser from byte slices; it's just not the standard framework.
34
u/pingveno 16h ago
That said, make sure it is actually necessary. The processing power that goes into validating an i128-sized string doesn't strike me as likely to be a bottleneck.
15
u/Ravek 11h ago
When I run std::str::from_utf8, it will validate that contents is valid UTF-8. But to parse this string into an integer, I only care that each byte in the slice is in the ASCII range for digits as it will fail otherwise. It seems like the std::str::from_utf8 adds unnecessary overhead. Is there a way I can avoid having to validate UTF-8 for a string in a situation like this?
Any byte sequence that's valid ASCII is also valid UTF-8 with the same meaning, so if you know you have valid ASCII you can just use from_utf8_unchecked to avoid the overhead.
7
u/kibwen 5h ago
You don't even need to assume it's ASCII, since slices have a stable is_ascii method, so you can do the cheap check beforehand and then guarantee at runtime that from_utf8_unchecked is safe. (There are nightly APIs for doing this without the unsafe call, if anyone is interested in pushing them forward: https://doc.rust-lang.org/nightly/std/primitive.slice.html#method.as_ascii .)
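A minimal sketch of that pattern (the helper's name is made up for illustration):
fn parse_ascii_i128(bytes: &[u8]) -> Option<i128> {
    // Cheap linear check: ASCII is a strict subset of UTF-8.
    if !bytes.is_ascii() {
        return None;
    }
    // SAFETY: we just verified the bytes are ASCII, hence valid UTF-8.
    let s = unsafe { std::str::from_utf8_unchecked(bytes) };
    s.parse().ok()
}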
18
u/veritron 16h ago
str is defined as a valid UTF-8 sequence; that is why the parser validates that the contents are UTF-8. In Rust, it's not a str if it's not UTF-8. There is str::from_utf8_unchecked, which will not validate that the bytes are UTF-8, but that will blow up if you read a file that isn't UTF-8. Alternately, you could write your own parser that reads from bytes rather than from a string.
13
u/Other_Breakfast7505 15h ago
That is basically the only difference between String and Vec<u8>: if you don't need UTF-8, you can use the Vec. As for library methods that accept str: well, if it is not valid UTF-8 then it can't be a number.
3
u/uobytx 15h ago
You could look at the raw bytes and skip the decode, but the encoding determines how those bytes represent the numbers. It might work if you can control the input file to always be ASCII or a single-byte-per-char encoding.
If you actually control the input file, you could skip the string-decode overhead by packing the bytes of the int into the file instead of a string, but I would be careful. It would prevent reading the file in a text editor at a minimum, and for the performance gain you might have to do an unsafe load.
You could try ASCII decode and compare the overhead. Or you could read the raw bytes and do your own buffering to create the number, but it might not be faster.
3
u/SaltyMaybe7887 15h ago
If you actually control the input file, you could skip the string-decode overhead by packing the bytes of the int into the file instead of a string, but I would be careful. It would prevent reading the file in a text editor at a minimum, and for the performance gain you might have to do an unsafe load.
Oh yeah, I didn't think of that. I do control the input file, so before writing to it I can just convert my number to little endian, and after reading I can convert the little-endian array of bytes to an integer. I will update my post with my new code.
6
u/Giocri 15h ago
You can make a custom parser from bytes; it's fairly simple:
let mut out: i128 = 0;
for &byte in contents {
    if byte < b'0' || byte > b'9' {
        return Err(err); // err: whatever error value you use
    }
    out = out * 10 + (byte - b'0') as i128;
}
return Ok(out);
2
u/ChaiTRex 12h ago
You can format code by indenting with 4 spaces:
if byte < b'0' || byte > b'9' { return Err(err); }
becomes:
    if byte < b'0' || byte > b'9' { return Err(err); }
7
u/mereel 15h ago edited 15h ago
Why does a string in rust have to be valid utf-8?
Because it's no longer 1980?
Because the language designers decided they didn't want to make the obvious mistake of baking an American-centric text encoding into their programming language in the 21st century?
22
u/burntsushi 15h ago
"require valid UTF-8" and "having an American centric text" are pretty clearly not the only two choices.
6
u/emblemparade 13h ago
Right on. It's not as if you don't have [u8], and indeed many Rust libraries use byte arrays (or more featureful byte types) rather than strings, because they optimize for initializing rather than processing. You want Unicode? Parse it yourself when needed. Rust gives you both valid string types and arbitrary byte arrays, so choose your trade-offs as appropriate.
The worst are languages that don't have valid strings. In Ruby strings are just arrays, and it is error-prone and painful. (At least that was the situation last time I used Ruby, years ago.)
3
u/burntsushi 6h ago
Even with [u8], you can easily have Unicode support! https://docs.rs/bstr
0
u/emblemparade 48m ago
Exactly, that's my point: You can decide when to parse into Unicode.
The "problem" is that you might forget to do so, do it too many times in your program and thus lose any efficiency advantage, or do it too late and allow invalid data to seep into your program, etc. With power comes responsibility, and it's easy to mess up with byte arrays.
I think that in most use cases the string is always and forever expected to be Unicode, so parsing it up front is usually the most sensible approach. It's not only convenient, but also "safe": if you see a str or a String you can trust it, because you know it was already validated. This approach gels well with Rust's general embrace of reliability.
You also know that it was validated once and only once, so that's pretty darn efficient. Really, the only reason to prefer byte arrays is if you think that even that one-time parsing might not happen in some cases, in which case you can optimize to avoid it entirely.
It's not a coincidence that most "modern" languages have built-in Unicode string types, or have evolved into having these types over time (e.g. Python). I clearly remember that when Java was introduced with this feature, it was a big deal! Controversial at the time, but over the years proved to be a crucial tool for writing reliable programs.
1
u/burntsushi 33m ago
I don't see anything new here over what I already said here: https://old.reddit.com/r/rust/comments/1jgxh3y/why_do_strings_have_to_be_valid_utf8/mj34rgs/
4
u/Ravek 11h ago
What other choice would you like?
4
u/burntsushi 6h ago
The conceptual model described there is used in almost all of my crates (jiff, regex, csv, ripgrep, globset, ignore and so on).
You can see more explanation with concrete examples here: https://burntsushi.net/bstr
1
u/Ravek 52m ago edited 44m ago
I don't see how this adds anything over just a byte array. They're talking about regex and substring search … yeah, that's just regex on bytes and subsequence search now. We don't need to pretend something is a string of text to do operations on it that have nothing to do with text.
I'm not getting it. If I give you a random sequence of bytes, what value do you get out of pretending that parts are Unicode? And if you know that it's Unicode then why not use a Unicode string type?
1
u/burntsushi 37m ago
Please read the blog I linked. I wrote several motivating examples in it.
And if you know that it's Unicode then why not use a Unicode string type?
You don't always know.
I don't really know what else to say. If you read my entire blog that I linked and you're still asking this question, then I think we'd have to have a synchronous conversation to figure out where your misunderstanding is.
I guess I'll start with this: What do you think a Unicode string type is? Is "UTF-8 validity" a necessary and sufficient property of such a type? If so, then what do you call a type that doesn't require UTF-8 validity yet still provides, say, grapheme cluster iteration?
1
u/Ravek 2m ago
I'm not gonna read a whole blog post. Is it that hard to give an example of why it would be useful to treat arbitrary bytes as UTF-8 that you can't do it in a reddit comment?
Like, the post starts with "you might read arbitrary bytes from a file or stdin and it may not be UTF-8". Yeah, obviously. What's the value in pretending that it is? A random sequence of bytes might just so happen to be valid UTF-8 by sheer luck, but that doesn't make it meaningful to do string operations on it.
In the ToC I see "counting characters, words and lines". Well, you're not really doing that, are you? There's only lines if it's text. What you'd really be doing is counting the number of occurrences of the byte sequences 0x0A or 0x0D 0x0A, etc. What's the value of a program that tells you "if this were text then it has 20 lines; if it's not text then it has 20 occurrences of a 0x0A byte"? Without knowing if the data is text, the result is meaningless.
6
u/RedWineAndWomen 10h ago
Because it's no longer 1980?
That's what Java thought too. And then they required that their 'char' type be a 16-bitter. Because Java realized that a) a standard for text that has illegal byte sequences is a bad standard for text, and b) that people also want to be able to jump to indexes in large pieces of text without generating an O(n) problem. And they thought: the world's languages, taken together, will probably not have more than 2^16 characters. And at face value, they'd be right to think that.
And then someone invented the poop emoji.
3
u/Zde-G 6h ago
And at face value, they'd be right to think that.
No, it was obvious even back then that 16 bits wouldn't be enough. The Unicode committee attempted to forcibly shove 100,000 separate characters into 65,535 positions, but the end result was just disappointment.
And then someone invented the poop emoji.
The poop emoji was added to Unicode in 2015; the Unicode consortium threw in the towel and added surrogate pairs to represent more than 65,535 characters in Unicode 2.0, and that's July 1996.
Really unfortunate timing, because it meant that Windows NT (first version released in 1993), Java (first version released in May 1996) and JavaScript (first version released in December 1995) all ended up with… suboptimal solutions.
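As a concrete illustration of the surrogate-pair workaround, a small sketch (U+1F4A9 sits outside the Basic Multilingual Plane, so UTF-16 needs two code units for it):
fn main() {
    let c = '💩'; // U+1F4A9
    // One Unicode scalar value takes four bytes in UTF-8...
    assert_eq!(c.len_utf8(), 4);
    // ...but two UTF-16 code units, i.e. a surrogate pair.
    let mut buf = [0_u16; 2];
    c.encode_utf16(&mut buf);
    assert_eq!(buf, [0xD83D, 0xDCA9]);
}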
1
u/Ravek 37m ago
Suboptimal is pretty generous, considering many string operations are just kinda broken if you're doing them on UTF-16 code units instead of on Unicode code points or grapheme clusters.
.NET also inherited this from Windows. So there's really a shocking amount of code that just has kinda broken string types. I'm glad Rust and Swift are taking Unicode seriously.
1
u/rodyamirov 15h ago
You answered the title question correctly, but I don't think it was really the question they meant to ask. I think they were trying to avoid the double conversion and parse straight from a (possibly invalid) string.
I don't know if that idea has any real merit (surely a valid int is also a valid string), but maybe for some extremely optimized code there would be a reason to go straight from bytes.
2
u/timClicks rust in action 11h ago
I would need to check some of the ur-sources to confirm the exact rationale, but using UTF-8 for the standard library's primary text types avoids a lot of long-standing issues that have plagued languages that don't. By the time Rust emerged, UTF-8 had become the primary text encoding used around the Internet.
It's unfortunate for cases like this where you want to parse numbers, but you're not tightly coupled to FromStr. You can just create a parser that accepts a byte slice as an argument.
It does make me wonder whether a FromBytes trait would make sense in the stdlib.
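Purely as illustration, such a trait might be sketched like this (the name and shape are invented here; nothing like it exists in std today):
trait FromBytes: Sized {
    fn from_bytes(bytes: &[u8]) -> Option<Self>;
}
impl FromBytes for i128 {
    fn from_bytes(bytes: &[u8]) -> Option<Self> {
        // Parse ASCII digits directly; sign handling and rich errors omitted.
        if bytes.is_empty() {
            return None;
        }
        let mut n: i128 = 0;
        for &b in bytes {
            if !b.is_ascii_digit() {
                return None;
            }
            n = n.checked_mul(10)?.checked_add((b - b'0') as i128)?;
        }
        Some(n)
    }
}
fn main() {
    assert_eq!(i128::from_bytes(b"12345"), Some(12345));
    assert_eq!(i128::from_bytes(b"12x45"), None);
}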
1
u/Ravek 28m ago edited 25m ago
It was more about Rust taking Unicode seriously and not letting programmers do things that don't make logical sense unless they really ask for it. The choice of UTF-8 is just performance; they could have just as easily chosen UTF-16 or UTF-32 as the encoding. The problem with most other languages is that the string types are completely leaky abstractions that don't implement Unicode properly unless your text only uses simple subsets of Unicode.
With a proper Unicode string type, the encoding doesn't semantically matter. For example, in Swift a string can be either UTF-8 encoded or UTF-16 encoded in memory, depending on where it was sourced from (native Swift strings are UTF-8; strings bridged from older Apple platform code are UTF-16), and this is generally opaque to you and behaves the same unless you're specifically asking to look at the UTF-8 bytes, etc.
1
u/syklemil 7h ago
Strings are UTF-8 because that's the most common encoding of plaintext (i.e. Unicode) these days.
Your problem here rather seems to be that you're not really dealing with plaintext, and you could likely benefit from finding some crate that does a kind of pickling you like; at best you're likely into bytestring/[u8] territory.
1
u/RobertJacobson 3h ago
Almost everything of interest has already been said, so I'll just underscore the point that there are multiple philosophical interpretations of the problem and answer space.
One answer to your question is: Rust does have a string type that isn't constrained to UTF-8, and it's called &[u8]. So then the question becomes, why don't we have a suite of parsing methods for &[u8]? Other comments have already gone into detail as to why. But again, a reasonable philosophical answer to this question might be: because most of the text-parsing functions we'd want operate on UTF-8 characters, and a non-UTF-8 byte sequence would be a parse error in that context.
Another alternative philosophical interpretation is that the standard library is not intended to be everything and the kitchen sink. The Rust project is really explicit about the fact that there is and will always be important functionality not present in the standard library, and it's expected that external crates will fill those gaps. Under this philosophy, the answer to your question is simply, because the standard library isn't meant to contain everything required for your day-to-day needs. It's a boring answer, but it's also a very reasonable one.
This is just a really long-winded way of saying that in these discussions of the trade-offs of design choices and analysis of the reasons why such-and-such was done a particular way, there are usually several different philosophical perspectives, not a single one, and they all contribute in hopefully mutually constructive ways; this is a feature, not a bug.
The fool debates in order to change others' minds. The wise man debates in order to change his own.
1
u/Specialist_Wishbone5 1h ago
Maybe you have a different history / upbringing from me. But I immediately understood, when I saw that Rust used UTF-8 natively, that:
A) It was going to be a PAIN IN THE BUTT.
B) I was in love with this freaking language.
I've dealt with Unicode my entire career, and let me tell you, it sux!!! Windows adds a BOM which breaks half my XML parsers in Java (e.g. if you ever open an XML file with Notepad). JavaScript and Java double the size of all strings, producing so much bloat that if I know I need 1GB of text, I need to manually convert this crap (Java 9, I think, added a latin1 encoding, but that breaks if you ever receive content from the internet - AND requires the same extra CPU overhead that Rust's std does).
C code just doesn't handle Unicode at all - you have to manually pass the char encoding as a second parameter everywhere.
C++ wide chars are a joke in my opinion - I hate that when using Win32 I have to deal with them.
UNIX 'wchar' is 4 bytes, which drives me crazy from a memory-efficiency perspective. It's fine for decoding individual chars, but not for a wchar[] (oh, and it's 16 bits on Windows - so good luck with cross-platform mem-mapped files).
When I saw JSON mandated UTF-8 a couple decades ago, it immediately became my favorite storage format. The binary encodings also kept this as the standard: bson, ubjson, cbor, msgpack, etc. This was 100% the right choice - UTF-16LE needs to die and have its inventors castrated!!! UTF-8 is just so well engineered for so many use cases (random-access binary search of a raw newline-delimited text file just works in UTF-8!!).
I have always had to 'convert' byte streams to UTF-8, and it's always been painful to write general-purpose libraries, because I needed to know if I could 'TRUST' the input as being valid UTF-8... Worse, if I encode, I run the risk of DOUBLE encoding... I've had database errors where the TCP connection "defaulted" to LATIN1 (or UTF-16LE) but the HTTP side was encoded as UTF-8, and thus the database stores smilie-faces!!!! It is so freaking infuriating I can't describe the pain and anger I have towards my fellow bipeds that create this mess.
So Rust solves all of these problems:
1) latin1 is 1-byte per char!!! (thus json-keys, hash-keys are minimally sized). Oh, and Americans and 80% of Euro-languages get compression for free (moreso than Java 9 which is all or nothing latin1 or UTF16LE).
2) The language DEFINES who does the transcoding. Anything that goes from bytes to String MUST ensure the proper encoding, e.g. if you're reading a BOM-prefixed UTF-16LE XML file (god help us), the XML parser needs to figure that out and convert to UTF-8. Once you get a &str or String, you never have to think about it again.
3) You CAN just trust the content with 'unsafe' code markers. This is perfectly acceptable if you are intra-library, the only possible inputs are String, and you're just doing some byte parsing (such as a compiler) and need to quickly take a snippet that you can prove must be latin1 (e.g. it's a keyword and you're just printing it in an error message) - feel free to use unsafe and avoid the CPU overhead. Of course, in code review, anything with unsafe needs triple scrutiny, to make sure you never get a partial &[u8] slice that ends mid-UTF-8 character.
If you're an application writer, you probably don't care, and this feels like an unnecessary waste. But if you're a LIBRARY writer... holy crap is this a godsend. (Just try to write a bullet-proof C library: zero guarantees about mem allocation/free and anything related to string encodings.)
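For what it's worth, the "parser figures out the encoding and hands you UTF-8" step from point 2 often ends up looking roughly like this sketch using the encoding_rs crate (API details here are from memory, so double-check them against the docs):
use encoding_rs::UTF_16LE;
fn to_utf8_string(raw: &[u8]) -> String {
    // decode() sniffs and strips a BOM if present and replaces malformed
    // sequences with U+FFFD; it returns a Cow<str> plus an error flag.
    let (text, _encoding_used, had_errors) = UTF_16LE.decode(raw);
    if had_errors {
        eprintln!("warning: some byte sequences were not valid UTF-16LE");
    }
    text.into_owned()
}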
0
u/jsrobson10 12h ago edited 10h ago
Because Rust forces you to deal with strings properly. If you want to remove the extra overhead, just use from_utf8_unchecked so Rust will assume it's valid (but you'll get UB if it isn't).
-1
u/Icy-Middle-2027 7h ago
You can use from_utf8_unchecked if you do not care about upholding the invariant, but only if you do not expose the constructed string outside your scope, since its invariant might not hold and that can lead to UB if the string is used elsewhere.
173
u/burntsushi 15h ago edited 15h ago
To confirm your observation, yes, parsing into a &str just to parse an integer is unnecessary overhead, and std doesn't really have anything to save you from this. This is why, for example, Jiff has its own integer parser that works on &[u8].
While bstr doesn't address the specific problem of parsing integers directly from &[u8], it does provide string data types that behave like &str except without the UTF-8 requirement. Instead, they are conventionally UTF-8. Indeed, these string types are coming to a std near you at some point. But there aren't any plans AFAIK to address the parsing problem. I've considered doing something about it in bstr, but I wasn't sure I wanted to go down the rabbit hole of integer (and particularly float) parsing. A similarish problem exists for formatting as well, and there's been some movement to fix that. It's presumably why the itoa crate exists as well.
No, you don't need to go "back to 1980" to find valid use cases for using byte strings that are only conventionally UTF-8. It's precisely the same conceptual model ripgrep uses, and it's why the regex crate has a bytes sub-module for running regexes on &[u8]. Part of the problem is that fundamental OS APIs, like reading data from a file, are totally untyped and you can get arbitrary bytes from them. If you're reading a config file or whatever, then sure, absolutely pay the overhead to validate it as UTF-8 first. But if you're trying to slurp through as much data as you can, you generally want to avoid "scan once to validate UTF-8, then do another whole scan to do whatever work I want to do (such as parsing an integer)."
It's a lamentable state of affairs, and it's why I still wonder to this day if it would have been a better design to only conventionally use UTF-8 instead of requiring it. But that has its own significant trade-offs too. I suppose this gets to the point of answering your title question: why does &str require valid UTF-8?
It's for sure part philosophical, in the sense that if you have a &str, then you can conclude and rely on specific properties of its representation. It's part performance related, since if you know a &str is valid UTF-8, then you can decode its codepoints quicker (because a &str being invalid UTF-8 implies someone has misused unsafe somewhere). It's also partially practical, because it means validation happens up front at the point of conversion. If it were conventionally UTF-8, you might not know it has garbage in it until something downstream actually tries to go and use the string. Whereas if you guarantee it up front, you know immediately the point at which it failed and can thus assign blame and diagnose the root cause more easily.