Made -exec (stdin ver) work via piping hack (XXX: We need to find how to pass the file directly.) (TODO: -exec{} still doesn't work: `No such file or directory` error when accessing /dev/fd/{fd} *and* /proc/{pid}/{fd}; not sure why yet.)
.stdin(file.as_ref().map(|_file| process::Stdio::piped() /*::from(file)*/).unwrap_or_else(|| process::Stdio::null())) //XXX: Maybe change to `piped()` and `io::copy()` from the beginning (using pread()/sendfile()/copy_file_range()?)
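// Sketch of the "pass the file directly" idea from the note above (an assumption,
// not the project's current code; relies on the `libc` crate). A common cause of
// ENOENT on /dev/fd/{fd} in the child is that the descriptor was created
// close-on-exec (e.g. memfd_create() with MFD_CLOEXEC), so it no longer exists
// after exec(); also note the proc path needs the fd/ component, i.e.
// /proc/{pid}/fd/{fd}, not /proc/{pid}/{fd}. Clearing FD_CLOEXEC before spawning
// lets the child open the data by path.
use std::os::unix::io::AsRawFd;
use std::process::{Child, Command};

fn spawn_with_fd_path(file: &std::fs::File, program: &str) -> std::io::Result<Child> {
    let fd = file.as_raw_fd();
    // Clear FD_CLOEXEC so the descriptor survives exec() in the child.
    unsafe {
        let flags = libc::fcntl(fd, libc::F_GETFD);
        if flags < 0 || libc::fcntl(fd, libc::F_SETFD, flags & !libc::FD_CLOEXEC) < 0 {
            return Err(std::io::Error::last_os_error());
        }
    }
    // Hypothetical call shape: hand the child a path it can open (the real code
    // would substitute this for the -exec{} placeholder).
    Command::new(program)
        .arg(format!("/dev/fd/{}", fd))
        .spawn()
}
// For the piped-stdin route, std::io::copy() on Linux can already use
// copy_file_range()/sendfile()/splice() internally when copying from a file into a
// pipe, so an explicit pread() loop shouldn't be needed.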
//TODO: We should establish a max memory threshold for this to prevent a full-system OOM: output a warning message if the memory file exceeds, say, 70-80% of free memory (not including memory used by this program (TODO: How do we calculate this efficiently?)), and fail with an error if it exceeds 90% of memory.
// Or, instead of using free memory as the basis for these levels on the max size of the memory file, use max memory? Or just total free memory at the start of the program? Or check free memory each time (slow! probably not this one...)
// Basing it off total memory seems best; perhaps make the percentage levels user-configurable at compile time (and allow the user to set the memory value as opposed to using the total system memory at runtime) or at runtime (compile-time preferred; use the crate that lets us read TOML config files at compile time; it's easy to find by looking through ~/work's Rust projects, I've used it before).
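// A rough sketch of the threshold check described in the TODO above (an assumption,
// not existing project code), basing the levels on total system memory read from
// /proc/meminfo. The 80%/90% warn/fail levels and the helper names are placeholders
// that would presumably become compile-time (or runtime) configuration.
use std::fs;

const WARN_PERCENT: u64 = 80;
const FAIL_PERCENT: u64 = 90;

/// Total system memory in bytes, parsed from the `MemTotal:` line of /proc/meminfo.
fn total_memory_bytes() -> Option<u64> {
    let meminfo = fs::read_to_string("/proc/meminfo").ok()?;
    let line = meminfo.lines().find(|l| l.starts_with("MemTotal:"))?;
    let kib: u64 = line.split_whitespace().nth(1)?.parse().ok()?;
    Some(kib * 1024)
}

/// Warn when the in-memory file passes WARN_PERCENT of total memory, fail past FAIL_PERCENT.
fn check_memfile_size(size: u64) -> Result<(), String> {
    let total = total_memory_bytes().ok_or("could not read /proc/meminfo")?;
    if size * 100 > total * FAIL_PERCENT {
        return Err(format!("memory file ({size} bytes) exceeds {FAIL_PERCENT}% of total memory ({total} bytes)"));
    }
    if size * 100 > total * WARN_PERCENT {
        eprintln!("warning: memory file ({size} bytes) exceeds {WARN_PERCENT}% of total memory ({total} bytes)");
    }
    Ok(())
}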
//TODO: maybe look into fd SEALing? Maybe we can prevent a consumer process from reading from stdout until we've finished the transfer. The name SEAL sounds like it might have something to do with that?
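// For reference on the sealing idea above: seals added with fcntl(F_ADD_SEALS)
// forbid further *modification* of a memfd (F_SEAL_WRITE / F_SEAL_GROW /
// F_SEAL_SHRINK); they don't block reads, so they can't hold a consumer back until
// the transfer finishes. Sealing once the copy is done does, however, guarantee the
// consumer sees an immutable, fully written file. Minimal sketch (assumed helper,
// not existing project code; relies on the `libc` crate and requires the fd to have
// been created with memfd_create(..., MFD_ALLOW_SEALING)):
use std::io;
use std::os::unix::io::AsRawFd;

fn seal_memfd(file: &std::fs::File) -> io::Result<()> {
    let seals = libc::F_SEAL_SEAL | libc::F_SEAL_SHRINK | libc::F_SEAL_GROW | libc::F_SEAL_WRITE;
    // F_ADD_SEALS takes the union of seals to add; it fails with EPERM if sealing
    // was not enabled at memfd_create() time.
    if unsafe { libc::fcntl(file.as_raw_fd(), libc::F_ADD_SEALS, seals) } < 0 {
        return Err(io::Error::last_os_error());
    }
    Ok(())
}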
let execfile;
cfg_if! {
    if #[cfg(feature = "memfile")] {
        execfile = work::memfd()
            .wrap_err("Operation failed")
            .with_note(|| "Strategy was `memfd`")?;
    } else {
        execfile = work::buffered()
            .wrap_err("Operation failed")
            .with_note(|| "Strategy was `buffered`")
            .with_warning(|| format!("It is possible fd {} (STDOUT_FILENO) has already been closed; if so, look for where that happens and prevent it. `stdout` should not be closed here.", stdout_fd).header("Possible bug"))?;