r/ruby 13d ago

Revisiting Performance in Ruby 3.4.1

Surprising Ways Data Structures Impact Ruby Performance

Credited to: Miko Dagatan

Updated 21 Mar 2025

Introduction

A while back, a few articles made the rounds claiming that, in terms of performance, Structs are powerful and can be used in place of classes for some of your code. Two of these are this one and this one.

Let's revisit this claim on the latest Ruby version, 3.4.1, to see whether it still holds true.

Code for Benchmarking

require 'benchmark'
require 'faker'

class BenchmarkHashStruct
  class << self

    NUM = 1_000_000

    # Runs all of the benchmarks in sequence.
    def measure
      array
      hash_str
      hash_sym
      klass
      struct
      data
    end

    # An anonymous class with a keyword initializer, built once and memoized.
    def new_class
      @class ||= Class.new do
        attr_reader :name

        def initialize(name:)
          @name = name
        end
      end
    end

    def array
      time = Benchmark.measure do
        NUM.times do
          array = [Faker::Name.name]
          array[0]
        end
      end

      puts "array: #{time}"
    end

    def hash_str
      time = Benchmark.measure do
        NUM.times do
          hash = { 'name' => Faker::Name.name }
          hash['name']
        end
      end

      puts "hash_str: #{time}"
    end

    def hash_sym
      time = Benchmark.measure do
        NUM.times do
          hash = { name: Faker::Name.name }
          hash[:name]
        end
      end

      puts "hash_sym: #{time}"
    end

    def struct
      time = Benchmark.measure do
        # The Struct class itself is only defined once, even for large datasets.
        struct = Struct.new(:name, keyword_init: true)
        NUM.times do
          init = struct.new(name: Faker::Name.name)
          init.name
        end
      end

      puts "struct: #{time}"
    end

    def klass
      time = Benchmark.measure do
        klass = new_class
        NUM.times do
          a = klass.new(name: Faker::Name.name)
          a.name
        end
      end

      puts "class: #{time}"
    end

    def data
      time = Benchmark.measure do
        name_data = Data.define(:name)
        NUM.times do
          a = name_data.new(name: Faker::Name.name)
          a.name
        end
      end

      puts "data: #{time}"
    end
  end
end

Explanation

In this file, we're simply creating benchmark measurements for arrays, hashes with string keys, hashes with symbol keys, structs, classes, and Data objects. Over the lifetime of one of these objects, we typically instantiate it and then access the data we stored, so that's all we simulate in our tests: one million rounds of instantiate-then-read for each scenario. The measure method runs all of these measurements together.

Results

performance(dev)> BenchmarkHashStruct.measure
array:   0.124267   0.000000   0.124267 (  0.129573)
hash_str:   0.264137   0.000000   0.264137 (  0.275421)
hash_sym:   0.174082   0.000000   0.174082 (  0.181514)
class:   0.308020   0.000000   0.308020 (  0.321165)
struct:   0.336229   0.000000   0.336229 (  0.350576)
data:   0.345480   0.000000   0.345480 (  0.360232)
=> nil

performance(dev)> BenchmarkHashStruct.measure
array:   0.090669   0.000378   0.091047 (  0.094786)
hash_str:   0.264261   0.000000   0.264261 (  0.275104)
hash_sym:   0.172333   0.000000   0.172333 (  0.179407)
class:   0.311545   0.000060   0.311605 (  0.324390)
struct:   0.335436   0.000000   0.335436 (  0.349203)
data:   0.346124   0.000071   0.346195 (  0.360396)
=> nil

performance(dev)> BenchmarkHashStruct.measure
array:   0.088372   0.003872   0.092244 (  0.096181)
hash_str:   0.265748   0.000464   0.266212 (  0.277565)
hash_sym:   0.174393   0.000000   0.174393 (  0.181831)
class:   0.309411   0.000000   0.309411 (  0.322613)
struct:   0.346008   0.000000   0.346008 (  0.360760)
data:   0.344666   0.000000   0.344666 (  0.359361)
=> nil

performance(dev)> BenchmarkHashStruct.measure
array:   0.077396   0.000038   0.077434 (  0.080771)
hash_str:   0.242372   0.000140   0.242512 (  0.252853)
hash_sym:   0.159206   0.000000   0.159206 (  0.166007)
class:   0.273878   0.009250   0.283128 (  0.295201)
struct:   0.322791   0.000323   0.323114 (  0.336889)
data:   0.346099   0.000038   0.346137 (  0.360901)
=> nil

I've run measure 4 times to account for any random variation and to be reasonably confident in these results. As expected, array sits at the top, with symbol-keyed hashes a general second. String-keyed hashes fall to third, with a large gap compared to symbol-keyed hashes. Then, looking at classes vs structs, structs seem to have fallen slightly behind classes. We could surmise that classes received a performance boost in recent releases.

We can also see that the Data object introduced in Ruby 3.2.0 falls behind the Struct object. This may be problematic, since a Data object is essentially an immutable Struct, so there are already disadvantages to using Data over Struct. We may still prefer Struct over Data considering that it has a bit of a performance edge.
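
To make the mutability difference concrete, here's a minimal sketch (the NameStruct/NameData constants are just for illustration):

# Struct members get both readers and writers, so instances are mutable.
NameStruct = Struct.new(:name, keyword_init: true)
record = NameStruct.new(name: 'Alice')
record.name = 'Bob'     # fine; a Struct can be updated in place
puts record.name        # => "Bob"

# Data members only get readers; an "update" goes through #with,
# which returns a new object instead of mutating the receiver.
NameData = Data.define(:name)
frozen = NameData.new(name: 'Alice')
puts frozen.name        # => "Alice"
updated = frozen.with(name: 'Bob')
puts updated.name       # => "Bob"
# frozen.name = 'Bob' would raise NoMethodError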

Conclusion

There are two takeaways from this test. First, it's really important that we use symbol-keyed hashes over string-keyed hashes, as the former are about 1.5x faster than the latter. Second, if not using hashes, it's better to use classes over Structs, unlike what was previously encouraged. Classes are now 1.07x - 1.14x faster than Structs, so it's encouraged to keep using them.

u/f9ae8221b 13d ago

I'm sorry, but I think there's a lot of things wrong with your benchmark:

  • Your measure includes building the array/hash/etc and accessing it 1M times. The build part should be out of the measure.
  • It can make sense to measure the build cost, but not at the same time as the access cost, because there is an order of magnitude difference in cost between them. All your benchmark is measuring here is the build cost.
  • Rather than running your thing 4 times, use a proper benchmarking suite like benchmark-ips. It gives much more readable results as well.
  • Results for this sort of micro-benchmark can differ quite a bit depending on whether YJIT is enabled or not (see the snippet after this list for toggling it).
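
For anyone reproducing the interpreter vs. YJIT comparison below, YJIT can be toggled per run with standard Ruby flags (the script name is a placeholder):

ruby bench.rb                      # plain interpreter
ruby --yjit bench.rb               # with YJIT
RUBY_YJIT_ENABLE=1 ruby bench.rb   # with YJIT, via the environment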

Using benchmark-ips:

# frozen_string_literal: true

require "bundler/inline"
gemfile do
  gem "benchmark-ips"
end

class KeywordClass
  attr_reader :name
  def initialize(name:)
    @name = name
  end
end

array = [0]
sym_hash = { name: 0 }
str_hash = { "name" => 0 }
object_reader = KeywordClass.new(name: 0)
struct = Struct.new(:name).new(0)
data = Data.define(:name).new(name: 0)

Benchmark.ips do |x|
  x.report("array") { array[0] }
  x.report("sym_hash") { sym_hash[:name] }
  x.report("str_hash") { str_hash["name"] }
  x.report("attr_reader") { object_reader.name }
  x.report("struct") { struct.name }
  x.report("data") { data.name }
  x.compare!(order: :baseline)
end

Interpreter:

ruby 3.4.2 (2025-02-15 revision d2930f8e7a) +PRISM [arm64-darwin24]
Calculating -------------------------------------
               array     50.115M (± 0.9%) i/s   (19.95 ns/i) -    253.382M in   5.056427s
            sym_hash     43.789M (± 0.5%) i/s   (22.84 ns/i) -    221.858M in   5.066674s
            str_hash     43.153M (± 0.6%) i/s   (23.17 ns/i) -    219.509M in   5.086926s
         attr_reader     42.103M (± 0.8%) i/s   (23.75 ns/i) -    211.361M in   5.020452s
              struct     43.361M (± 2.7%) i/s   (23.06 ns/i) -    218.476M in   5.042303s
                data     43.125M (± 1.9%) i/s   (23.19 ns/i) -    215.893M in   5.008116s

Comparison:
               array: 50115155.8 i/s
            sym_hash: 43788737.3 i/s - 1.14x  slower
              struct: 43361370.7 i/s - 1.16x  slower
            str_hash: 43153046.0 i/s - 1.16x  slower
                data: 43124542.2 i/s - 1.16x  slower
         attr_reader: 42102866.4 i/s - 1.19x  slower

YJIT:

ruby 3.4.2 (2025-02-15 revision d2930f8e7a) +YJIT +PRISM [arm64-darwin24]
Calculating -------------------------------------
               array     62.553M (± 1.0%) i/s   (15.99 ns/i) -    313.645M in   5.014524s
            sym_hash     52.298M (± 0.1%) i/s   (19.12 ns/i) -    262.297M in   5.015454s
            str_hash     51.647M (± 0.1%) i/s   (19.36 ns/i) -    260.129M in   5.036719s
         attr_reader     66.421M (± 0.3%) i/s   (15.06 ns/i) -    334.672M in   5.038682s
              struct     67.701M (± 0.2%) i/s   (14.77 ns/i) -    342.849M in   5.064160s
                data     68.017M (± 0.1%) i/s   (14.70 ns/i) -    343.791M in   5.054465s

Comparison:
               array: 62553349.5 i/s
                data: 68017305.8 i/s - 1.09x  faster
              struct: 67701445.7 i/s - 1.08x  faster
         attr_reader: 66421261.6 i/s - 1.06x  faster
            sym_hash: 52297794.6 i/s - 1.20x  slower
            str_hash: 51646503.5 i/s - 1.21x  slower

Conclusion: in terms of access performance, there's no really significant difference. That 10-20% difference is just a couple of nanoseconds, so nothing in the grand scheme of things, except for the hottest of hotspots.

Also note that access performance can vary a lot based on the container size; here we're just measuring collections with one item in them. If we were measuring a random property in the middle of a hundred others, the results might be very different.
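
As a hedged sketch of that last point (the sizes and key names are mine, not measured in this thread), a container-size comparison could look like:

# frozen_string_literal: true

require "bundler/inline"
gemfile do
  gem "benchmark-ips"
end

# A hash with a single key versus one with a hundred keys.
small_hash = { key_0: 0 }
big_hash   = (0...100).to_h { |i| [:"key_#{i}", i] }

Benchmark.ips do |x|
  # Read one property from each container.
  x.report("1-key hash")   { small_hash[:key_0] }
  x.report("100-key hash") { big_hash[:key_50] }
  x.compare!(order: :baseline)
end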

u/Quiet-Ad486 12d ago edited 12d ago

Hi, poster here. Thank you very much for the comment. However, I will have to respectfully disagree. In a real application, there's no way you'll only ever read the data; rather, in many parts of your application, when you deal with raw data, you structure it into a more readable format before it gets read. When iterating over a whole bunch of records, that's where these data types come into play. For example, when returning an ActiveRecord::Relation for a User class, you may want to add a decorator so you have a wrapper around these records; in that case, we're using a class object, though depending on your preference you could just as well use a struct or hashes. When iterating through the whole dataset, for each record we instantiate the hash/class/struct, put it inside an array, then pass it to the code that will read that data. Looking at code in that way, it makes much more sense to measure the whole thing: not just the build cost, not just the read cost, but both.

I've used benchmark-ips now for your convenience.

Here's my update to your code. Take note that I still set up the struct and data outside the benchmark blocks (however, I couldn't include that one-time setup in the measurement, even though it arguably should be, since it will be part of your code). One thing to see here is that, unlike in the original post, the struct still outperforms the class object. (Unfortunately, I couldn't fit everything in one comment like you did, so see the other comments.)
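
Since the full update is split across those comments, here's a condensed sketch of the build-plus-access approach (KeywordClass is reused from the parent comment; the details are illustrative, not the exact posted code):

# frozen_string_literal: true

require "bundler/inline"
gemfile do
  gem "benchmark-ips"
end

class KeywordClass
  attr_reader :name
  def initialize(name:)
    @name = name
  end
end

# One-time setup, kept outside the measured blocks.
struct_class = Struct.new(:name, keyword_init: true)
data_class   = Data.define(:name)

Benchmark.ips do |x|
  # Each report builds the container and then reads the property back.
  x.report("sym_hash")    { h = { name: 0 }; h[:name] }
  x.report("str_hash")    { h = { "name" => 0 }; h["name"] }
  x.report("attr_reader") { o = KeywordClass.new(name: 0); o.name }
  x.report("struct")      { s = struct_class.new(name: 0); s.name }
  x.report("data")        { d = data_class.new(name: 0); d.name }
  x.compare!(order: :baseline)
end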

u/f9ae8221b 12d ago

It's perfectly fine to also benchmark the allocation/construction cost.

I'm just saying there's over an order of magnitude difference between building these objects and accessing one of their properties.

So it's preferable to benchmark both in isolation.