Date | Commit message |
|
Using Array#map! instead of Array#map can save us at
least one object allocation.
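A minimal sketch of the difference (paths and URL made up): `map` allocates a fresh result Array, while `map!` rewrites the receiver in place.

```ruby
# map! rewrites the receiver in place instead of allocating a
# new result Array, saving one allocation per call site
paths = ["dev1/0/000/123.fid", "dev2/0/000/123.fid"]
paths.map! { |p| "http://127.0.0.1:7500/#{p}" }
```

This only pays off when the original array is no longer needed afterward.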
|
|
GET is all we need and we can save some code this way.
If we ever need to use HEAD much again, we can use
net/http/persistent since our internal HTTP classes
are only optimized for large responses.
|
|
This is only needed for users on old MogileFS servers
|
|
We'll now accept an optional argument which can be passed to
IO.copy_stream directly. This should make life easier on users
so they won't be exposed to our internals to make efficient
copies of large files.
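A hedged sketch of the idea: the optional argument is handed straight to IO.copy_stream, so callers never touch the internal socket. The `fetch` method here is a stand-in, not the real client API.

```ruby
require 'stringio'

# hedged sketch: an optional dst argument forwarded to
# IO.copy_stream; dst may be a filename, an IO, or anything
# else IO.copy_stream accepts
def fetch(src_io, dst = nil)
  return src_io.read unless dst
  IO.copy_stream(src_io, dst)
end

src = StringIO.new("large file contents")
out = StringIO.new
fetch(src, out)
```

With a filename as `dst`, IO.copy_stream can use efficient kernel-side copies without buffering the whole response in Ruby.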
|
|
We can use the file_info command to get things faster, now.
|
|
This was added in MogileFS 2.45
|
|
This is a command added in MogileFS 2.45
|
|
Ruby 1.9.3 considers them harmful
|
|
Using unknown sizes with StoreContent is now supported
(but you're probably better off using a pipe or just
an object that acts like an IO)
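An illustrative sketch of the pipe alternative mentioned above; the consumer side is simulated with a plain read, where a real upload would hand the reader end to the store call.

```ruby
# sketch: feed data of unknown total size through a pipe; the
# reader end is what an upload call would consume
rd, wr = IO.pipe
producer = Thread.new do
  wr.write("part one, ")
  wr.write("part two")
  wr.close            # EOF tells the consumer the stream is done
end
body = rd.read        # a real upload would read rd in chunks
producer.join
```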
|
|
readpartial is not in the Rack spec for rack.input
objects, but something like IO#read is.
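A portable read loop looks like this; a StringIO stands in for `env["rack.input"]`.

```ruby
require 'stringio'

# rack.input guarantees #read(length) but not #readpartial, so a
# fixed-size read loop works across all Rack input wrappers
input = StringIO.new("a" * 10_000)   # stands in for env["rack.input"]
chunks = []
while buf = input.read(4096)         # read(len) returns nil at EOF
  chunks << buf
end
```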
|
|
Of course the backend server needs to support chunking,
but the latest Perlbal does.
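For reference, this is the HTTP/1.1 chunked framing a client emits when the content length is unknown; any chunking-capable backend (such as Perlbal here) decodes it server-side. The helper is illustrative, not the client's actual code.

```ruby
# HTTP/1.1 chunked framing: hex size, CRLF, data, CRLF,
# terminated by a zero-length chunk
def chunk(data)
  "#{data.bytesize.to_s(16)}\r\n#{data}\r\n"
end

body = chunk("hello") << chunk("world!") << "0\r\n\r\n"
```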
|
|
Not some random hash from the parser.
|
|
|
|
This can be useful for streaming to a backend
(this feature needs tests)
|
|
Avoid deepening stack depth and make it easier to migrate
fully to 1.9 in the future (dropping 1.8 support).
|
|
|
|
Should be easier to read this way
|
|
Splitting calls to backend.create_open + create_close between
files is also extremely confusing and error-prone.
Hopefully nobody actually depends on some attributes
we've removed.
|
|
This is cleaner and replaces redundant code where we would retry
paths. MogileFS::MogileFS#size now raises on error instead of
returning nil.
|
|
We're trying to use as much as we can from Ruby 1.9
|
|
mogilefsd may not like that
|
|
This will need further refactoring
|
|
We won't trust Ruby 1.9 String weirdness since data storage
is locale-agnostic
|
|
It's useful to know the size of the file we're storing.
|
|
This is just like get_paths, but integrates better into Ruby
applications that use the parsed-out URI to do operations
directly on the URIs.
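An illustrative sketch of the shape of the result (URLs made up): parsed URI objects give callers host/port/path accessors without string munging.

```ruby
require 'uri'

# illustrative: parsing returned paths into URI objects lets
# callers use #host, #port, #path directly (sample URLs made up)
paths = ["http://192.168.1.2:7500/dev1/0/000/123.fid",
         "http://192.168.1.3:7500/dev2/0/000/123.fid"]
uris = paths.map { |p| URI.parse(p) }
```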
|
|
This cleans up some of the internal HTTP handling code a bit,
too, and does a better job of closing sockets than it did
previously.
|
|
I'm not sure if mogilefsd ever returned broken output for this,
but just in case we compact the output so users don't have to
worry about nil entries.
|
|
This takes advantage of the (ugly) new mogilefs_size Socket
attribute to avoid duplicating Content-Length parsing code.
I really wish Net::HTTP in Ruby was actually usable...
|
|
This adds a sysread_full utility method with configurable
timeouts. Individual reads can be timed out as well as
the entire sysread_full call.
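A hedged sketch of a sysread_full-style helper (the real implementation differs in detail): each read waits at most `read_timeout` via IO.select, while the whole call observes an overall deadline.

```ruby
require 'socket'

# hedged sketch: per-read timeout via IO.select, plus an overall
# deadline for the whole call; raises IOError on either timeout
def sysread_full(io, bytes, read_timeout, total_timeout)
  buf = ''.dup
  deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + total_timeout
  while buf.bytesize < bytes
    left = deadline - Process.clock_gettime(Process::CLOCK_MONOTONIC)
    raise IOError, "read timed out" if left <= 0
    wait = [read_timeout, left].min
    raise IOError, "read timed out" unless IO.select([io], nil, nil, wait)
    buf << io.sysread(bytes - buf.bytesize)
  end
  buf
end
```

Looping on sysread matters because a single sysread may return fewer bytes than requested.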
|
|
|
|
A new way to call 'store_content' with a
MogileFS::Util::StoreContent object allows you to roll your
own method of streaming data to mogile on an upload (instead
of using a string or file)
[ew: this still requires a known content length beforehand]
[ew: applied with --whitespace=strip, rewritten subject,
80-column wrapping]
Signed-off-by: Eric Wong <normalperson@yhbt.net>
|
|
File and StringIO objects need to be opened in binary mode,
otherwise they take the default encoding format. Thankfully,
Sockets and Tempfile objects seem to be binary by default as of
1.9.1; but it really is a mess to have to deal with FS
abstractions that try to deal with encoding crap behind your
back...
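A small demonstration of the point: "rb"/binmode pins the encoding to ASCII-8BIT so raw bytes survive regardless of Encoding.default_external.

```ruby
require 'tempfile'

# binmode / "rb" force Encoding::BINARY (ASCII-8BIT), so byte
# counts and checksums aren't skewed by the locale's default
tmp = Tempfile.new('binmode-demo')
tmp.binmode
tmp.write("\xff\x00\xfe".b)   # raw bytes, no transcoding
tmp.flush
data = File.open(tmp.path, 'rb') { |f| f.read }
```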
|
|
Last I checked, the trailing "return" is not optimized away by
MRI 1.8. Additionally, remove some useless temporary variables.
|
|
Don't specify an empty class (e.g. "class="), instead
just omit the parameter entirely if it is nil.
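A sketch of the shape of the fix (argument names illustrative): drop nil-valued arguments before serializing, so nothing is sent instead of an empty "class=".

```ruby
# drop nil-valued arguments before serializing, instead of
# emitting "class=" with an empty value (names illustrative)
args = { "domain" => "testdomain", "key" => "mykey", "class" => nil }
args.delete_if { |_, v| v.nil? }
query = args.map { |k, v| "#{k}=#{v}" }.join("&")
```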
|
|
Oops, this method needs to bypass transformations we do
in the normal MogileFS::MogileFS code path.
|
|
We never checked the HTTP status code when making the HEAD
request. All this HTTP stuff should probably be moved to
HTTPFile.
|
|
Correctly fail when we get non-200 HTTP responses and retry on
the next URI.
|
|
Previously, when we got multiple destinations to
upload to and one of them failed, we failed to
correctly retry the next destination. This will
set the correct devid and URL.
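A sketch of the retry shape (URLs made up, the failing PUT simulated): try each (devid, path) candidate in turn and record the pair that actually succeeded.

```ruby
# sketch: try each (devid, url) destination in turn; remember the
# pair that succeeded so create_close reports the right device
dests = [[1, "http://10.0.0.1:7500/dev1/0/000/9.fid"],
         [2, "http://10.0.0.2:7500/dev2/0/000/9.fid"]]
devid = url = nil
dests.each do |cand_devid, cand_url|
  begin
    raise Errno::ECONNREFUSED if cand_devid == 1 # simulate dead host
    devid, url = cand_devid, cand_url            # remember the winner
    break
  rescue SystemCallError
    next                                         # retry next destination
  end
end
```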
|
|
We dropped NFS support, so this can be simplified further.
|
|
MogileFS 2.x upstream no longer supports it, it's
become a maintenance burden, and NFS is a horrible thing
anyway.
Attempting to use this with servers that support NFS will result
in MogileFS::UnsupportedPathError being raised.
|
|
"bigfile" is used by the large file support, too.
"big_io" also allows an IO object (and not just a pathname)
to be sent to us.
|
|
This removes the dependency on unsafe methods used in the
Timeout class.
Charles makes some good points here:
http://blog.headius.com/2008/02/rubys-threadraise-threadkill-timeoutrb.html
And even matz agrees:
http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/294446
Of course, I strongly dislike any unnecessary use of threads,
and implementations using native threads to do timeouts makes me
even more uncomfortable.
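A minimal sketch of the select-based alternative: a per-read timeout via IO.select, with none of the Thread#raise machinery the stdlib Timeout module relies on. The helper name is made up.

```ruby
require 'socket'

# a per-read timeout via IO.select instead of Timeout.timeout,
# so no second thread and no asynchronous Thread#raise
def read_with_timeout(io, maxlen, secs)
  raise IOError, "read timed out" unless IO.select([io], nil, nil, secs)
  io.sysread(maxlen)
end

rd, wr = Socket.pair(:UNIX, :STREAM)
wr.write("pong")
reply = read_with_timeout(rd, 4, 1.0)
```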
|
|
This should complete the integration of the read-only Mysql
backend into MogileFS::MogileFS.
|
|
Needs more tests, but it seems to work...
I seem to have discovered a bug in mogtool which causes it to
generate incorrect MD5 checksums when the --gzip flag is used
(and --gzip actually just does zlib deflate, not something
that gzip(1) can decompress).
So right now MD5 checksums are only verified on non-zlib-deflated
files.
|
|
We'll be reusing it in the big file module
|
|
This was leading to ugly "no_temp_file" errors that
got converted to exceptions.
|
|
|
|
The ArgumentErrors happen at initialization time, so
I'll keep those as-is
|
|
Also easier to trap and deal with
|
|
|