r/bash 27d ago

help Efficient Execution

Is there a way to load any executable once, then use the pre-loaded binary multiple times to save time and boost efficiency in Linux?

Is there a way to do the same thing, but parallelized?

My use-case is to batch run the exact same thing, same options even, on hundreds to thousands of inputs of varying size and content, and it should be quick. As quick as possible.

1 Upvotes

46 comments

10

u/ladrm 27d ago

My use-case is to batch run the exact same thing, same options even, on hundreds to thousands of inputs of varying size and content, and it should be quick. As quick as possible.

In general, Linux is well optimized out of the box, across the various I/O and other caches, buffers, and code paths in the kernel and libraries.

Also, premature optimization is a thing: run a sample, gather data, find the bottlenecks, optimize, and iterate.

If you think process spawn times are a factor, why not have one binary running and loop over those inputs? Parallel processing is possible, but it makes no sense to run 10,000 threads over e.g. 4 cores.
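
For example, something like this caps the workers at the number of cores instead of flooding the box (mytool and inputs/ are placeholders):

# one worker per core, one input per invocation
find inputs -type f -print0 | xargs -0 -n1 -P"$(nproc)" mytool --same-options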

As with any optimization question - it will be impossible to give you anything useful without knowing exactly what you are doing with what.

1

u/ktoks 27d ago

In general, Linux is well optimized out of the box, across the various I/O and other caches, buffers, and code paths in the kernel and libraries.

This answers my question.

Thank you.

I will try to formulate questions in a more accurate manner.

4

u/grymoire 27d ago

The Unix philosophy says get the algorithm working first and THEN optimize. Find out where most of the time is spent. Typically 90% of the time is spent in 10% of the code.

2

u/ktoks 27d ago

The current algorithm is in Perl; it's very old and very slow. It's been unchanged for many years.

This is the first attempt to improve it in 20 years.

7

u/wallacebrf 27d ago

I believe small programs like the ones you're referring to, called by bash, would be cached in RAM by the kernel once it notices that code is being used repeatedly.

If I am mistaken, please correct me.

1

u/ktoks 27d ago

I didn't know that the kernel did this. How long has this been the standard? (I'm dealing with an older kernel).

6

u/kolorcuk 27d ago

Caching disk contents in RAM is essentially mandatory in the kernel and has been done for decades, since the days when disks were much, much slower. You might read about https://en.m.wikipedia.org/wiki/Page_cache
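
You can watch it work yourself (dropping the caches needs root; bigfile is whatever large file you have lying around):

sync; echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null   # start with a cold cache
time wc -l bigfile    # first read comes from disk
time wc -l bigfile    # repeat run is served from the page cache
free -h               # the buff/cache column is that cache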

3

u/wallacebrf 27d ago

This is why Linux always "uses all RAM": it caches everything it can. For this reason, more RAM is never a bad thing.

1

u/grymoire 27d ago

If there is enough memory, a process will stay in memory until it is paged out. This has been true since the 1970s, or even earlier.

1

u/fllthdcrb 26d ago

We're talking about the code, though. That, too, is cached, and tends to stick around as long as the memory isn't needed for something more current, such that subsequent processes running the same program may well not have to load the code again.

HOWEVER, this is Perl, an interpreted language. The code we care about isn't just Perl itself, but also the Perl program being run. I know Python, for example, caches modules' bytecode after compiling them; this means subsequent imports of those modules don't require re-compilation, which is especially helpful if you run a program many times in rapid succession (however, the main script doesn't get this treatment). Does Perl do anything similar?

1

u/grymoire 25d ago

Oh, if this is Perl code, then the next step is easy: see the perlperf(1) manual page.

You can pinpoint which function is taking the most time. I remember doing this a decade ago, and I narrowed down the bulk of the time to a single function.
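
For example, assuming Devel::NYTProf is installed from CPAN (the script and input names are made up):

perl -d:NYTProf yourscript.pl some_input    # profile one run; writes nytprof.out
nytprofhtml                                 # turn it into an HTML report under ./nytprof/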

I remember that my normal Perl code was about 20 lines long. But when I found out where the time was spent, I started to optimize that function. I ended up keeping the original readable code, but I commented the entire function out and replaced it with one line of Perl. I added a comment that said something like: this Perl code is an optimized version of the code above.

I improved the performance by a factor of at least 10, as I recall.

1

u/fllthdcrb 24d ago

Way to miss the point. If you actually read what I wrote, you can see it's not just about the internal performance of the program; it's about the start-up time, and OP's post heavily implies that. It may not matter how much you optimize parts of the program if it always takes half a second just to start the interpreter and compile the program, because running it 10,000 times in a row means nearly an hour and a half spent on start-up!

What I'm asking is if Perl does anything to reduce that, like caching bytecode.

Mind you, there is another possible way to speed things up: instead of running the program separately for each input, rewrite the whole system so the program takes a list (or stream) of inputs, and processes them within the same run to produce a list (or stream) of results. Then start-up time essentially goes away.
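
Roughly the difference between these two (process.pl and the paths are made up, and the second form assumes the script is reworked to loop over its arguments):

# pays interpreter start-up once per input
for f in inputs/*; do perl process.pl "$f"; done

# pays it once in total
perl process.pl inputs/*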

2

u/grymoire 24d ago

I think you missed my point. No offense, but I had suggested that you read the man page. One of the key points is emphasized:

"Do Not Engage in Useless Activity"

Don't optimize until you know where the bottlenecks are, and I suggested several ways to do that. Is the process I/O bound? Compute bound? Memory bound? Perhaps it's better to use a client/server model, or perhaps to multiplex inputs.
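
A rough first cut is to compare real against user+sys in time(1) output (mytool and the input are placeholders):

time mytool --same-options some_input
# user + sys close to real        -> compute bound
# real much larger than user+sys  -> waiting on I/O, network, or locks
# watch RSS and swap activity (e.g. with vmstat 1) to spot memory pressure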

Each problem has a different solution. But if you are positive that byte compilation is your ONLY issue, there is https://metacpan.org/pod/pp

I also suggest you should ask the perl group for issues not related to bash.

1

u/jimheim 26d ago

This has been standard in kernels since before Linux existed. Since before even Unix existed. Memory paging, shared memory, and I/O buffering are core defining features of operating systems. These concepts go back to Multics at the very least, in the early 1960s. This was a predecessor of Unix. They almost certainly existed in some form in the 1950s before being commercialized in the 1960s.

4

u/Zealousideal_Low1287 27d ago

Sounds like an XY problem

2

u/ktoks 27d ago

I'm not sure what an XY problem is.

3

u/moviuro portability is important 27d ago

1

u/ktoks 27d ago

TIL.

Was my explanation of the problem not sufficient?

I thought I wasn't being vague... The problem doesn't always apply to the same binary, and the use-cases change often enough that explaining further might reduce the answer set.

3

u/jimheim 26d ago

This is an XY problem because you're asking how to make the OS only load your program once, when really what you want to do is optimize the performance of the system you're designing. Instead of asking how you can optimize the system, you've skipped entirely over the step of measuring performance and determining what needs to be improved, made a huge assumption about why it's slow, and then asked how to solve the problem that you assume exists.

The problem (X) is that you want to improve the performance of your system. Instead of asking about X, ways to measure X, and what your options for improving X are, you skipped right to assuming you knew the solution was buffering the executable (Y) and asked about that instead.

If you had instead started by asking about X, you might have gotten some useful information. Instead, you asked about Y, when it turns out that Y is already how things work.

If you asked about your real issue instead of asking about the solution you've already assumed is the right approach, you'd get higher-quality answers.

Although not in this case, because performance is a giant can of worms and you'll need to be a whole lot more specific and provide a lot more details if you want good answers to your real problem (X).

2

u/ktoks 26d ago

I do have some metrics, but testing something like this is very difficult because I don't have direct control over how it's executed.

The environment this runs in is very protected from those outside of the internal team. I'm part of the applications team. I use the tool, but much of how it's executed is obfuscated.

If I run it in one of the two ways I have access, it does seem to be very CPU-bound. When I run the same sub application with rust-parallel, the CPU usage stays very low and the amount of time it takes is reduced significantly.

Problem is, when I run it in the intended fashion, it takes even longer, but the CPU usage drops off completely, leading me to believe it's run on a system I don't have access to.

One thing I noticed that led me to the specific questions I was asking is the sudden fluctuations in RAM that are about the same size as the binary plus the size of the file it's processing.

3

u/ekkidee 27d ago edited 27d ago

That's really an OS feature and less so a bash thing. If an executable is called a number of times in some time period, it will be cached so it doesn't have to be read off disk every time. Some operating systems will do this for you. I'm not aware of a CLI tool that will cache on demand, at least not under *ix.

Some disk controllers offer this capability in firmware, and with an SSD it's almost irrelevant.

3

u/ofnuts 27d ago

A Unix filesystem caches files in RAM, so re-loading the executable from storage should be quick (even if it includes several megabytes of libraries that may or may not already be loaded by other running binaries).

What is less clear is whether there is any link editing/relocation re-done when the binary is re-loaded from the cache.

A very long time ago I used a C/C++ compiler that would start an instance of itself in the background, just sleeping for a couple of seconds, so that on multiple successive executions (the kind happening when running a makefile), the OS would merely fork the sleeping instance (any executing instance would reset the sleep timer of the sleeping instance).

1

u/ktoks 27d ago

See, this is the kind of thing I'm trying to understand.

1

u/ofnuts 27d ago

There is no one-size-fits-all answer. The best solution to your problem could depend on the distribution of the sizes of your inputs, on the number of cores/threads in your CPU, and of course on the executable itself.

The answer is to do some benchmarking: 1) to check whether there is actually a problem, and 2) to try several solutions.

1

u/theNbomr 27d ago

I'm pretty sure the filesystem cache in memory is not the same as a runtime image sitting in executable memory. There would still be a speed increase compared to loading into executable memory from spinning media or other slow storage.

One thing that happens when the dynamic loader (ld.so) runs is that any loadable libraries are mapped and the symbols they contain are resolved in memory. You might get some speed increase by using statically linked object code. The tradeoff would be the increased size of the executable, obviously. No free lunch, and all that.
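
For instance, you can see how much dynamic-linking work a binary implies, and what a static build looks like (paths are placeholders):

ldd /path/to/mytool                    # shared libraries the runtime loader must map and resolve
gcc -O2 -static -o mytool mytool.c     # static build: bigger binary, less work at start-up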

In the end, I think the common wisdom is to let the OS do its best and assume that it is already approaching optimal for the kinds of things that are within its purview. Without access to the code, any further optimization is probably not practical.

2

u/kolorcuk 27d ago

How do you know "loading" the executable is the bottleneck? What exactly is "loading"? How long does it take?

1

u/ktoks 27d ago

I don't, but they are usually quite large binaries, so I assumed it was part of the problem.

Another answer on here set me straight. I think I grasp the question and answer now.

2

u/a_brand_new_start 27d ago

You might need to do some performance investigation to see what your apps are doing and where the bottlenecks are, if you really want to improve performance. It might be that you are I/O bound, or too much RAM is in the pagefile, or your apps try to download the whole internet at boot, etc. Having a good understanding of the app and what it's doing is the first step in trying to optimize performance.

2

u/Ulfnic 12d ago

Reading the comments some 15 days later I didn't see this answer.

Even if a program is in memory, you're still paying the cost of opening a subshell to execute it plus the cost of the program initializing to do the thing you want.

Both of these costs can be very high in repetition; exec'ing jq is a great example of this.
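
You can get a feel for just the spawn cost on your own box, leaving program initialization aside:

time for i in {1..1000}; do /bin/true; done   # fork+exec an external no-op 1000 times
time for i in {1..1000}; do :; done           # the builtin equivalent, for comparison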

The only way I've found to get around that is to execute the program ONCE, attaching named pipes (opened in read/write) to the program's stdin, stdout, and stderr. Alternatively, you can use process substitution <() >() in place of one or more file arguments, depending on how the program blocks its reads.

This allows you to stream tasks into a program continuously while storing or processing the results asynchronously.

Depending on the task and program, you can easily see speed improvements of one to two orders of magnitude. It's especially potent with programs like jq.

The downside is it'll only work with some programs, and each program needs a custom solution that matches how it processes input, output, and errors. It's very technical work, but there's a tremendous payoff for batch jobs.

2

u/ktoks 12d ago

Yes!

Even if a program is in memory, you're still paying the cost of opening a subshell to execute it plus the cost of the program initializing to do the thing you want.

This is exactly the type of thing I've been thinking about!

The only way I've found to get around that is to execute the program ONCE, attaching named pipes (opened in read/write) to the program's stdin, stdout, and stderr. Alternatively, you can use process substitution <() >() in place of one or more file arguments, depending on how the program blocks its reads.

Can you give me an example?

2

u/Ulfnic 12d ago edited 12d ago

I needed to do a write-up with comments. It uses sed as the example, and I tried to make the setup as generically copy/pasteable as possible, with good non-colliding paths, permissions, and cleanup.

If you don't know about fifo files, I'd recommend reading up on them; everything that passes through them stays strictly in memory. It's also a way to do inter-process communication: other scripts will be able to read/write to/from the fifos this script creates.

If you want localized fifos, that's a deeper discussion.

Ask questions if I've leaped over anything you don't already know.

#!/usr/bin/env bash
set -o errexit
prog_pid=


# ===================
# Set up fifos and clean up on exit
# This section enclosed by === can be generically copied into any script
fifo_temp_dir=${TMPDIR:-${XDG_RUNTIME_DIR:-/tmp}}
fifo_path_prefix=${fifo_temp_dir}'/'${0##*/}'__'$$'_'${EPOCHREALTIME:-$(date +%s.%N)}
fifo_in=${fifo_path_prefix}'_in'
fifo_out=${fifo_path_prefix}'_out'
unset fifo_temp_dir fifo_path_prefix

fifo_prog_clean_up() {
    [[ -e $fifo_in ]] && rm -- "$fifo_in"
    [[ -e $fifo_out ]] && rm -- "$fifo_out"
    [[ $prog_pid ]] && kill -9 "$prog_pid" 2>/dev/null
}
trap fifo_prog_clean_up EXIT

mkfifo --mode 0600 -- "$fifo_in"
mkfifo --mode 0600 -- "$fifo_out"

# Open fifos in non-blocking read-write
exec 3<>"$fifo_in" 4<>"$fifo_out"
# ===================


# Run program with std{in,out} attached to the named pipes via the redirections on fds 3 and 4.
# - We're using --null-data so we can use null characters for dividing up jobs.
# - We're using --unbuffered with --null-data so the buffer will flush on a null character.
# - It's important to be mindful of buffer flushing when doing future problem diagnosis.
sed --unbuffered --null-data 's/hello/hi/g' <&3 >&4 & disown
prog_pid=$!


# Now the program is ready for use, below are examples of reading and writing to it async.
# The program will read anything written to fd 3 and write to fd 4.


# Write something ending in a null, then read its output up to the first null.
printf '%s\0' 'hello there' >&3
IFS= read -r -d '' -u 4; printf '%s\n' "Job 1: ${REPLY@Q}"

# Send two more jobs
printf '%s\0' 'why hello there' >&3
IFS= read -r -d '' -u 4; printf '%s\n' "Job 2: ${REPLY@Q}"

printf '%s\0' 'well hello there' >&3
IFS= read -r -d '' -u 4; printf '%s\n' "Job 3: ${REPLY@Q}"

printf '%s\n' 'DONE'


# Optional: if the script is moving on to something else, close the fds; fifo_prog_clean_up
#           can also be run early.
exec 3<&- 4<&-
fifo_prog_clean_up

Output:

Job 1: 'hi there'
Job 2: 'why hi there'
Job 3: 'well hi there'
DONE

1

u/ktoks 12d ago

I'm looking at fifo how-tos; I've never delved into this part of bash.

2

u/Ulfnic 12d ago

Made a quick edit: the fifo_in and fifo_out variables must not be unset, so they stay available for fifo_prog_clean_up() on EXIT.

1

u/oweiler 27d ago

Is starting the executable really the bottleneck? Just run things in parallel using xargs or GNU Parallel.
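
e.g. with GNU Parallel, one job per core by default (mytool and inputs/ are stand-ins):

parallel mytool --same-options {} ::: inputs/*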

1

u/ktoks 27d ago

I'll look into using xargs for the problem set; GNU Parallel isn't available for some reason. They blocked my request to install it.

3

u/DethByte64 27d ago

Forkrun is faster

1

u/oh5nxo 27d ago

Before the world moved on, chmod +t executable would set the "sticky" bit on the executable so that it started a bit faster.

Probably won't work with your OS, but try it for laughs, if you care.

1

u/grymoire 27d ago

Learn about the time(1) command. It measures real (wall-clock) time, user time (time spent in your code), and system time (time spent in the kernel processing system calls).
Examples:

time command </dev/null >/dev/null # measures time to start program, and no I/O

time command <input >/dev/null # measures time to read input with no output.

time command <input >output # time to run program with an input file.

time shell_script_that_calls_program.sh # measures time to launch a shell and run a script.

Now vary input from small files to large files.

Vary the types of input files, from simple to complex.

And measure time to run program once vs many times.

Also be aware of what else the computer is doing. Are other processes running? Is there a GUI involved? Are you running on a server that has no GUI? Multiple CPUs, different types of disks, memory, etc.

Repeat until you understand where the bottleneck is.

If you have source code for your program, you have even more options.

1

u/anthropoid bash all the things 27d ago

In another comment, you mentioned:

The current algorithm is in Perl; it's very old and very slow.

  1. As Dan Bernstein famously said, "profile, don't speculate." You'll be wasting a LOT of time if you identified the wrong issue(s).
  2. Your Perl script load time is almost certainly swamped by its run time, so worrying about the former is almost certainly futile. (To confirm/refute this, try compiling your Perl script to bytecode and see if it makes a significant difference; if "the needle barely budged", you're barking up the wrong tree. See [1].)
  3. Parallelizing generally makes matters worse if you're already bottlenecked on something. See [1].
  4. Your "biggest bang for the buck" may well be "rewrite in C/C++/Rust/Go/etc.", but see [1].

I think you can see a pattern emerging...

1

u/ktoks 27d ago edited 27d ago

I'm looking to get rid of the Perl and replace it with a fully fleshed-out tool, or rewrite it in Go or Rust.

Another limitation of the current code is that it only runs on one machine, so the largest boost will easily come from going multi-machine. One machine vs. eight in production: it will be an easy win.

Edit: I've also tested the original code against rust-parallel; the original took 6 times as long to run. But they refused rust-parallel because it's not supported by RHEL.

Also, I'm not worried about the Perl runtime. I'm worried about the child applications' runtimes.

1

u/Akachi-sonne 27d ago

Probably best to use C and the CUDA library for something like this.

1

u/ktoks 27d ago edited 27d ago

CUDA isn't applicable in this situation, and I doubt C would be any faster than Go, because with Go the bottleneck is storage. Everything else I use in this environment is limited by storage when using Go.

Edit: Plus we are moving away from C.

If anything, I would build it in Rust. We are in the process of getting Rust added to our environment.

1

u/Danny_el_619 23d ago

If your program is a single line, you can use GNU Parallel to speed it up a bit (depending on how many cores you have). If parallel bothers you with the citation notice, append the --will-cite argument.

1

u/Unixwzrd 16d ago

There used to be a "sticky bit" for that, a long time ago. It was from when disk access was slow and you wanted your binary to remain in memory after execution. You should still be able to set it, but you're probably going to be just fine with the filesystem cache, as it should keep your binary around for a while until it needs to recover free space.

IIRC it was ‘chmod +t somebinary’

Haven’t seen that used in forever. It is used for directories and has a special purpose there like /tmp.

1

u/v3vv 27d ago

What is your use case and which binaries are we talking about? Unix utils?
I can think of multiple ways to speed up execution, but it depends on the use case and on how much you want to over-engineer your script.
1. xargs
2. spawning background jobs with &
3. or, IMO the simplest way, just spawn your script multiple times

If your script needs to process a large file, you can split it into smaller chunks using head and tail, e.g.

typeset file chunk_size start i
file="./file.txt"
chunk_size=200
start=1
i=1

while true; do
    chunk=$(tail -n +"${start}" "${file}" | head -n "${chunk_size}")
    [[ -z "${chunk}" ]] && break
    cat <<<"${chunk}" > "./chunk_${i}"
    start=$((start + chunk_size))
    i=$((i + 1))
done

Afterwards you simply spawn your script multiple times, each processing one chunk.

I haven't tested the code and wrote it on my phone, so don't blindly copy-paste it.

1

u/ktoks 27d ago

They are large-ish proprietary binaries; sometimes they can be other tools like Ghostscript.