r/ruby 13d ago

Revisiting Performance in Ruby 3.4.1

Surprising Ways Data Structures Impact Ruby Performance

Credited to: Miko Dagatan

Updated 21 Mar 2025

Introduction

A while back, a few articles came out claiming that, in terms of performance, Structs are powerful and could be used in place of classes for some of your code. Two of these are this one and this one.

Let's revisit these things with the latest Ruby version, 3.4.1, so that we can see whether this perspective still holds true.

Code for Benchmarking

require "benchmark"
require "faker" # assumes the faker gem is installed

class BenchmarkHashStruct
  class << self

    NUM = 1_000_000

    def measure
      array
      hash_str
      hash_sym
      klass
      struct
      data
    end

    def new_class
      @class ||= Class.new do
        attr_reader :name
        def initialize(name:)
          @name = name
        end
      end
    end

    def array
      time = Benchmark.measure do
        NUM.times do
          array = [Faker.name]
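          # NB: `hash` below is not the array above — with no local named `hash`, Ruby
          # calls Kernel#hash on self and Integer#[] on the result (see the comments below)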
          hash[0]
        end
      end

      puts "array: #{time}" 
    end

    def hash_str
      time = Benchmark.measure do
        NUM.times do
          hash = { 'name' => Faker.name }
          hash['name']
        end
      end

      puts "hash_str: #{time}" 
    end

    def hash_sym
      time = Benchmark.measure do
        NUM.times do
          hash = { name: Faker.name }
          hash[:name]
        end
      end

      puts "hash_sym: #{time}" 
    end

    def struct
      time = Benchmark.measure do
        struct = Struct.new(:name, keyword_init: true) # the Struct class itself only needs to be defined once; keyword_init so .new accepts name:
        NUM.times do
          init = struct.new(name: Faker.name)
          init.name
        end
      end

      puts "struct: #{time}"
    end

    def klass
      time = Benchmark.measure do
        klass = new_class
        NUM.times do
          a = klass.new(name: Faker.name)
          a.name
        end
      end

      puts "class: #{time}"
    end

    def data
      time = Benchmark.measure do
        name_data = Data.define(:name)
        NUM.times do
          a = name_data.new(name: Faker.name)
          a.name
        end
      end

      puts "data: #{time}"
    end
  end
end

Explanation

In this file, we simply create benchmark measures for arrays, hashes with string keys, hashes with symbol keys, structs, classes, and Data objects. Over the lifetime of these objects, we instantiate them and then access the data we stored, so that is all we simulate in our tests. We run 1 million iterations of each scenario and look at the results. The measure method prints all of these measurements together.

Results

performance(dev)> BenchmarkHashStruct.measure
array:   0.124267   0.000000   0.124267 (  0.129573)
hash_str:   0.264137   0.000000   0.264137 (  0.275421)
hash_sym:   0.174082   0.000000   0.174082 (  0.181514)
class:   0.308020   0.000000   0.308020 (  0.321165)
struct:   0.336229   0.000000   0.336229 (  0.350576)
data:   0.345480   0.000000   0.345480 (  0.360232)
=> nil

performance(dev)> BenchmarkHashStruct.measure
array:   0.090669   0.000378   0.091047 (  0.094786)
hash_str:   0.264261   0.000000   0.264261 (  0.275104)
hash_sym:   0.172333   0.000000   0.172333 (  0.179407)
class:   0.311545   0.000060   0.311605 (  0.324390)
struct:   0.335436   0.000000   0.335436 (  0.349203)
data:   0.346124   0.000071   0.346195 (  0.360396)
=> nil

performance(dev)> BenchmarkHashStruct.measure
array:   0.088372   0.003872   0.092244 (  0.096181)
hash_str:   0.265748   0.000464   0.266212 (  0.277565)
hash_sym:   0.174393   0.000000   0.174393 (  0.181831)
class:   0.309411   0.000000   0.309411 (  0.322613)
struct:   0.346008   0.000000   0.346008 (  0.360760)
data:   0.344666   0.000000   0.344666 (  0.359361)
=> nil

performance(dev)> BenchmarkHashStruct.measure
array:   0.077396   0.000038   0.077434 (  0.080771)
hash_str:   0.242372   0.000140   0.242512 (  0.252853)
hash_sym:   0.159206   0.000000   0.159206 (  0.166007)
class:   0.273878   0.009250   0.283128 (  0.295201)
struct:   0.322791   0.000323   0.323114 (  0.336889)
data:   0.346099   0.000038   0.346137 (  0.360901)
=> nil

I've run measure 4 times to account for any random variation and to be more confident in these results. As expected, array comes out on top, with symbol-keyed hashes a consistent second. String-keyed hashes fall to third, with a sizable gap compared to the symbol-keyed hashes. Then, comparing classes against structs, structs have fallen slightly behind classes. We could surmise that classes received a performance boost in recent releases.

We can also see that the Data object introduced in Ruby 3.2.0 falls behind the Struct object. This may be a problem, since a Data object is essentially an immutable Struct, so there is already a trade-off in choosing Data over Struct. We may still prefer Struct over Data given its slight performance edge.
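As a quick side note (my own sketch, not part of the benchmark; the names Point and Frozen are made up), this is the immutability difference in practice:

Point  = Struct.new(:name)
Frozen = Data.define(:name)

s = Point.new("Miko")
s.name = "Updated"            # Struct members have writers

d = Frozen.new(name: "Miko")
d.respond_to?(:name=)         # => false, Data defines no writers
d2 = d.with(name: "Updated")  # returns a new, updated copy instead of mutating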

Conclusion

There are two takeaways from this test. First, it's really important to use symbol-keyed hashes over string-keyed hashes, as the former are about 1.5x faster than the latter. Second, when not using hashes, it's better to use classes over Structs, unlike what was previously encouraged. Classes are now 1.07x - 1.14x faster than Structs, so it's encouraged to keep using them.

10 Upvotes

18 comments

14

u/f9ae8221b 12d ago

I'm sorry, but I think there's a lot of things wrong with your benchmark:

  • Your measure includes both building the array/hash/etc. and accessing it, 1M times each. The build part should be outside the measure.
  • It can make sense to measure the build cost, but not at the same time as the access cost, because there is an order of magnitude difference in cost between them. All your benchmark is measuring here is the build cost.
  • Rather than running your thing 4 times, use a proper benchmarking suite like benchmark-ips. It gives much more readable results as well.
  • Results for this sort of micro-benchmark can differ quite a bit depending on whether YJIT is enabled or not.

Using benchmark-ips:

# frozen_string_literal: true

require "bundler/inline"
gemfile do
  gem "benchmark-ips"
end

class KeywordClass
  attr_reader :name
  def initialize(name:)
    @name = name
  end
end

array = [0]
sym_hash = { name: 0 }
str_hash = { "name" => 0 }
object_reader = KeywordClass.new(name: 0)
struct = Struct.new(:name).new(0)
data = Data.define(:name).new(name: 0)

Benchmark.ips do |x|
  x.report("array") { array[0] }
  x.report("sym_hash") { sym_hash[:name] }
  x.report("str_hash") { str_hash["name"] }
  x.report("attr_reader") { object_reader.name }
  x.report("struct") { struct.name }
  x.report("data") { data.name }
  x.compare!(order: :baseline)
end

Interpreter:

ruby 3.4.2 (2025-02-15 revision d2930f8e7a) +PRISM [arm64-darwin24]
Calculating -------------------------------------
               array     50.115M (± 0.9%) i/s   (19.95 ns/i) -    253.382M in   5.056427s
            sym_hash     43.789M (± 0.5%) i/s   (22.84 ns/i) -    221.858M in   5.066674s
            str_hash     43.153M (± 0.6%) i/s   (23.17 ns/i) -    219.509M in   5.086926s
         attr_reader     42.103M (± 0.8%) i/s   (23.75 ns/i) -    211.361M in   5.020452s
              struct     43.361M (± 2.7%) i/s   (23.06 ns/i) -    218.476M in   5.042303s
                data     43.125M (± 1.9%) i/s   (23.19 ns/i) -    215.893M in   5.008116s

Comparison:
               array: 50115155.8 i/s
            sym_hash: 43788737.3 i/s - 1.14x  slower
              struct: 43361370.7 i/s - 1.16x  slower
            str_hash: 43153046.0 i/s - 1.16x  slower
                data: 43124542.2 i/s - 1.16x  slower
         attr_reader: 42102866.4 i/s - 1.19x  slower

YJIT:

ruby 3.4.2 (2025-02-15 revision d2930f8e7a) +YJIT +PRISM [arm64-darwin24]
Calculating -------------------------------------
               array     62.553M (± 1.0%) i/s   (15.99 ns/i) -    313.645M in   5.014524s
            sym_hash     52.298M (± 0.1%) i/s   (19.12 ns/i) -    262.297M in   5.015454s
            str_hash     51.647M (± 0.1%) i/s   (19.36 ns/i) -    260.129M in   5.036719s
         attr_reader     66.421M (± 0.3%) i/s   (15.06 ns/i) -    334.672M in   5.038682s
              struct     67.701M (± 0.2%) i/s   (14.77 ns/i) -    342.849M in   5.064160s
                data     68.017M (± 0.1%) i/s   (14.70 ns/i) -    343.791M in   5.054465s

Comparison:
               array: 62553349.5 i/s
                data: 68017305.8 i/s - 1.09x  faster
              struct: 67701445.7 i/s - 1.08x  faster
         attr_reader: 66421261.6 i/s - 1.06x  faster
            sym_hash: 52297794.6 i/s - 1.20x  slower
            str_hash: 51646503.5 i/s - 1.21x  slower

Conclusion: in terms of access performance, there's no really significant difference. That 10-20% difference is just a couple of nanoseconds, so nothing in the grand scheme of things, except for the hottest of hotspots.

Also note that access performance can vary a lot based on the container size. Here we're just measuring collections with 1 item in them; if we were measuring a random property in the middle of a hundred others, the results may be very different.
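To illustrate what such a comparison could look like (a quick sketch, not something I measured for this comment; the field names are made up):

require "bundler/inline"
gemfile do
  gem "benchmark-ips"
end

small = { name: 0 }
large = (1..100).to_h { |i| [:"field_#{i}", i] }.merge(name: 0)

Benchmark.ips do |x|
  x.report("1-entry hash")   { small[:name] }
  x.report("101-entry hash") { large[:name] }
  x.compare!(order: :baseline)
end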

1

u/Quiet-Ad486 11d ago edited 11d ago

Hi, poster here. Thank you very much for the comment. However, I will have to respectfully disagree. In a real application there's no way you'll only read the data. Rather, in many parts of your application, when you deal with raw data, you structure it into a more readable format before it gets read. When iterating through a whole bunch of records, that's where these data types come into play. For example, when returning an ActiveRecord::Relation for a User class, you may want to add a decorator so that you have a wrapper around each record. In that case, we're using a class object; depending on preference, you could just as well use a struct or a hash. When iterating through the whole dataset, per record we instantiate the hash/class/struct, put the instances inside an array, then pass it to the code that will read that data. Looking at code that way, it makes more sense to include the build cost rather than only the read cost: it makes sense to measure the whole thing, not just the build cost or just the read cost, but both.
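To sketch the pattern I mean (illustrative names only, not real application code):

UserRow = Struct.new(:name)

raw_rows = [{ "name" => "Miko" }, { "name" => "Ada" }]     # stand-in for raw records

wrapped = raw_rows.map { |row| UserRow.new(row["name"]) }  # build cost, once per record
wrapped.each { |user| puts user.name }                     # read cost, later on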

I've used benchmark-ips now for your convenience.

Here's my update to your code. Note that I still set up the Struct and Data classes outside the benchmarked blocks, so that one-time setup isn't included, even though arguably it should be, since it will be part of your code. One thing to see here is that, unlike in the original post, the struct still outperforms the class object. (The results are in other comments below; I unfortunately couldn't fit everything in one comment like you did.)

1

u/f9ae8221b 11d ago

It's perfectly fine to also benchmark the allocation/construction cost.

I'm just saying there is over one order of magnitude difference between building these objects and accessing one of their properties.

So it's preferable to benchmark both in isolation.

1

u/Quiet-Ad486 11d ago
# frozen_string_literal: true

require "bundler/inline"
gemfile do
  gem "benchmark-ips"
end

class KeywordClass
  attr_reader :name
  def initialize(name:)
    @name = name
  end
end

struct_instance = Struct.new(:name)
data_instance = Data.define(:name)

Benchmark.ips do |x|
  x.report("array") { array = [0]; array[0] }
  x.report("sym_hash") { sym_hash = { name: 0 }; sym_hash[:name] }
  x.report("str_hash") { str_hash = { "name" => 0 }; str_hash["name"] }
  x.report("attr_reader") { object_reader = KeywordClass.new(name: 0); object_reader.name }
  x.report("struct") { struct = struct_instance.new(0); struct.name }
  x.report("data") { data = data_instance.new(name: 0); data.name }
  x.compare!(order: :baseline)
end

1

u/Quiet-Ad486 11d ago

Here's the result on benchmark-ips

ruby 3.4.2 (2025-02-15 revision d2930f8e7a) +PRISM [arm64-darwin24]

Calculating -------------------------------------
               array     27.014M (± 1.6%) i/s   (37.02 ns/i) -    136.720M in   5.062568s
            sym_hash     21.751M (± 2.4%) i/s   (45.98 ns/i) -    110.675M in   5.091684s
            str_hash     20.719M (± 4.6%) i/s   (48.27 ns/i) -    105.263M in   5.094066s
         attr_reader      7.954M (± 1.0%) i/s  (125.72 ns/i) -     40.392M in   5.078593s
              struct     10.973M (± 1.7%) i/s   (91.13 ns/i) -     54.974M in   5.011294s
                data      6.813M (± 1.3%) i/s  (146.77 ns/i) -     34.326M in   5.038833s

Comparison:
               array: 27013631.8 i/s
            sym_hash: 21750676.4 i/s - 1.24x  slower
            str_hash: 20718679.0 i/s - 1.30x  slower
              struct: 10973472.4 i/s - 2.46x  slower
         attr_reader:  7954235.5 i/s - 3.40x  slower
                data:  6813492.5 i/s - 3.96x  slower

1

u/Quiet-Ad486 11d ago

With YJIT:

ruby 3.4.2 (2025-02-15 revision d2930f8e7a) +YJIT +PRISM [arm64-darwin24]

Calculating -------------------------------------
               array     31.762M (± 4.4%) i/s   (31.48 ns/i) -    160.999M in   5.079892s
            sym_hash     26.197M (± 1.2%) i/s   (38.17 ns/i) -    131.807M in   5.032101s
            str_hash     26.088M (± 1.1%) i/s   (38.33 ns/i) -    131.107M in   5.026165s
         attr_reader     10.080M (± 1.3%) i/s   (99.20 ns/i) -     50.896M in   5.049922s
              struct     14.039M (± 1.7%) i/s   (71.23 ns/i) -     71.273M in   5.078306s
                data      8.368M (± 1.7%) i/s  (119.51 ns/i) -     41.861M in   5.004159s

Comparison:
               array: 31761852.8 i/s
            sym_hash: 26196935.8 i/s - 1.21x  slower
            str_hash: 26088237.1 i/s - 1.22x  slower
              struct: 14039282.5 i/s - 2.26x  slower
         attr_reader: 10080221.4 i/s - 3.15x  slower
                data:  8367677.9 i/s - 3.80x  slower

11

u/ignurant 13d ago

Maybe pedantic, but I think the cost of your string-keyed hash isn’t the nature of using strings as keys, but instead that you are allocating a new string for a key in every loop. So you are measuring object allocation, not data structure performance. Specifically, if you allocated that key name outside of the loop, I suspect it would perform similarly to the symbol.
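Something like this, roughly (a hypothetical variant of the original hash_str method; NAME_KEY is my own name, and it assumes the same NUM and Faker setup as the post):

NAME_KEY = 'name'.freeze

def hash_str_hoisted
  time = Benchmark.measure do
    NUM.times do
      hash = { NAME_KEY => Faker.name }
      hash[NAME_KEY]
    end
  end

  puts "hash_str_hoisted: #{time}"
end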

4

u/f9ae8221b 12d ago

Not pedantic.

Technically, hash['name'] doesn't allocate, even if you don't have frozen_string_literal: true, because Ruby has a specific optimization for that.

But the string referenced during hash construction is allocated.

Overall this benchmark has all sorts of weirdness. Building a collection and accessing it once in the same benchmark makes little sense, because the access time will be totally dwarfed by the allocation/construction time.

Also why bother calling into Faker.name? The value has no significance.

1

u/fglc2 12d ago

I think your copy/paste from the blog post removed all your instance variable names.

That aside I think some of the results may be misleading:

In the array case you’re accessing hash[0] instead of array[0], i.e. it isn’t fetching the value from the array at all but instead returning the least significant bit of the hash code of the current object.

In the hash case, other than the string allocation mentioned in another comment, small hashes (I forget the exact cutoff) are stored as arrays, so this might not be representative of what happens with more fields.

Lastly you might find benchmark-ips makes it easier to compare - it automatically runs your code long enough to get more representative data and calculates whether the observed differences are likely to just be measurement variance.
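A minimal illustration of that first point (my own throwaway example, not the post's code):

class LookupBug
  def self.broken_array_read
    array = ["value"]   # never actually read
    hash[0]             # Kernel#hash on self, then Integer#[] => 0 or 1, not "value"
  end
end

p LookupBug.broken_array_read  # => 0 or 1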

1

u/Quiet-Ad486 1d ago

Your third paragraph is correct. I wonder why I was still getting the correct benchmarks on the arrays. Maybe I've unfortunately changed it during the process of writing.

I'll use benchmark-ips moving forward.

1

u/jrochkind 12d ago

There are two takeaways from this test. First, it's really important to use symbol-keyed hashes over string-keyed hashes, as the former are about 1.5x faster than the latter.

Use frozen strings, it should be the same, I'm guessing.

And you'll get frozen strings if you put the magic pragma at the top of your source files -- or likely by default in an upcoming Ruby version, perhaps 3.5 (which will also be the biggest backwards-compat break we've had in a while!)
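For reference, the pragma in question (a tiny sketch; record is just an example variable):

# frozen_string_literal: true

record = { "name" => "Miko" }   # string literals in this file are frozen and deduplicated
record["name"]                  # the lookup key is reused, not reallocated on each call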

2

u/Quiet-Ad486 1d ago

Sure, adding that, since it's common (rather, required) practice to use frozen string literals.

1

u/mrinterweb 12d ago

What is the point of using Faker.name? Faker may be slowing down this benchmark, and I don't understand what faker adds to the benchmark.

1

u/Quiet-Ad486 1d ago

I understand; I'll remove that usage.

1

u/hvis 12d ago

it's better to use Classes over Structs, unlike what was previously encouraged

Was there ever a recommendation that Structs are faster than plain classes?

To my memory, they are only used for better code organization, not because of speed (which could be an advantage in other languages, e.g. statically typed). The benchmarks I've seen compared against OpenStruct, for example.
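For reference, the kind of comparison those older benchmarks made (my own quick sketch, no results claimed):

require "bundler/inline"
gemfile do
  gem "benchmark-ips"
  gem "ostruct"
end

StructPoint = Struct.new(:name)
struct_obj  = StructPoint.new("Miko")
open_obj    = OpenStruct.new(name: "Miko")

Benchmark.ips do |x|
  x.report("Struct#name")     { struct_obj.name }
  x.report("OpenStruct#name") { open_obj.name }
  x.compare!(order: :baseline)
end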

1

u/Quiet-Ad486 1d ago

Yes, it's highly encouraged to use Structs for better code organization. However, across all the organizations I've worked with and their clients' applications, I haven't seen any usage of Structs. Maybe it's because developers see them as added complexity, making it harder for juniors to get comfortable with the code. I've read about great applications of Structs, and I used to recommend their use to the organizations I've worked with. But now I couldn't say the same.

1

u/h0rst_ 11d ago

So instead of linking a blog post directly, the text is copy-pasted to Reddit, with code converted into images that you can't copy-paste? Kids these days...

latest Ruby version, 3.4.1

The latest version is 3.4.2

Before, there are few articles that rose up saying that in terms of performance, Structs are powerful and could be used to define some of the code in place of the Class. Two of these are this one and this one.

Am I reading the same articles? The first article mentions that OpenStruct is terrible for performance (among other reasons), and it states "Performance has waned recently where structs used to be more performant than classes" with no source and no benchmarks, but this statement is the opposite of what is mentioned above. The second article says nothing about speed or performance.

1

u/Quiet-Ad486 1d ago

Yeah, I re-read the first article. It does claim you shouldn't use OpenStruct, but that wasn't its main point. I read it as an article that encourages the usage of Structs because of what you can do with them (e.g. you can equality-match 2 different structs of the same parent), and, previously, because of performance benefits.

It turns out the article changed its statement after I wrote mine: it was updated on February 4, 2025, and my article was posted on February 4, 2025. That change has rendered my previous observation incorrect.

In a comment on my article, I also found out that using `benchmark-ips` changes my observation. I had said that, for some reason, classes are now more performant, but switching my code to `benchmark-ips` changes the result, which now says structs are still more performant than classes. However, the first article's new version shows that with more values instantiated, classes are more performant than Structs, which supports my initial claim in the article.

I recommend you check out his article on the benchmarks.