Valgrind

boetes.org


Skip blocking sync when we have lots of space available. Always check
space a while after sync; we still have the sync timer, so we know
which chunks were synced asynchronously within a certain timeframe.
Thus we can know whether we perhaps need to perform a hash check.

Add a 'possibly corrupt' flag or such, which can be heeded by resume
save?



Properly set the max queue size etc.

Don't trigger hash done when calling hash checked when already done?


Fix the hashing thing so we can pause hash checks.


- In Handshake::prepare_peer_info(), check if peer id already exists?

Move client id stuff into libtorrent?


When receiving connection from the same id/address, check the current
timeout of the previous connection?



? Add lots of checks to the curl_* code.


The scheduled early borkage:

We're queuing up loading of torrents. So is the problem that we're
triggering the loading of watched torrents too early?

Make sure we don't erase Downloads we've just created but not yet inserted.



- Is there a way to identify whether some files are on the same fs,
  not necessarily 100% reliable.
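A cheap heuristic for this (a sketch only; `same_filesystem` is a hypothetical helper, and comparing device ids is indeed not 100% reliable with bind mounts or network filesystems) is to compare the `st_dev` fields from stat(2):

```cpp
#include <sys/stat.h>

// Heuristic sketch: two paths are likely on the same filesystem when
// stat(2) reports the same device id. Not 100% reliable (bind mounts,
// network filesystems), matching the caveat in the note above.
bool same_filesystem(const char* a, const char* b) {
  struct stat sa, sb;
  if (stat(a, &sa) != 0 || stat(b, &sb) != 0)
    return false;  // unknown; treat as different to be safe
  return sa.st_dev == sb.st_dev;
}
```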


Start working on a new tracker class, or settings for syncing etc?


A separate skipped/overhead rate?


Allow for greater control over tracker messages etc.

Make loading of initial torrents into a queue thingie?


- View::received(..., SLOTS_*) could not find download.


Make more hashing finished stuff generic in recv_hash done? Or
something that can be shared by both recv and done.


Make a generic failed state, replacing hashing failed.

Do we need to set "state=0" when something fails?


Add more states, 'deprecate' 1 for a while:

* stopped, 0.
* idle? allow hash check etc?, 1
* forced start, 2
* scheduled, 3

Or rather make hashing truly orthogonal?

We need to indicate when a download shouldn't be hashed, and when it
merely happens to be closed.

manual/recommend_start?



Trackers: Add information about failed/success, move slots to
TrackerList.



Assign bad peer info upon success of a chunk with failed pieces?

Use a flag for possibly bad peers?



! Too fast scheduled action borks view.

Make it possible to add events upon various state changes. Use
variables?

Consider adding a namespace/subdir for data/disk related stuff, like
Block*, a new Memory thing, etc.

- display ignore_ratio.

Each BlockTransfer has the index of the one they equal. ~0 for not
downloading/what is in the mmap.

When a transfer differs, or when a failed hash is received, add a new
piece buffer. The transfer sets the index in the vector.

Clear erased from the block, so that differing pieces don't get
included.

How do we handle unfinished, differing, transfers? We create a new
m_failed buffer, do we just keep the ref-count == 0?

Don't bother with more than one transfer atm. When dissimilar just
throw it out. ATM only add to the m_failed vector on hash fail.


When a leading transfer is erased, reset the current position. Remove
any transfers that ended up being dissimilar. Change leader.

If the dissimilar peer is further ahead, consider switching?


Throttle skipping of data.

Check that reading a chunk from initial buffer is properly handled for
all paths.


When one transfer passes another, it becomes the leader. (Save the
leader in block?) If there is a data mismatch then the leader becomes
the owner, even if it is slow. (Except if it is stalled, then copy the
data and change leader)

If they are differing, but the current one is very fast and passes the
previous, then replace.

On failed hash check we know who did the whole block: the leader and
those who did not get marked as being different. Do we use the stalled
count here to decide to switch leader?

On re-download we need to also check against the previous data.

Make a test build that stops certain downloads until some other peer
can fill in.


Mark stalled pieces.


Start a new chunk when we have enough queue slots available to cover a
whole chunk.

Try to wait until one has a whole chunk of slots available. (assuming
we're transferring fast enough)


Bad erase from priority queue by one of the http gets. (Memory corruption?)


Rework remove_invalid, don't apply each time we call downloading.


Boolean in BlockTransfer indicating whether it matches what was
downloaded previously.

When multiple sources download a single block, make the leader the
owner. Compare the data. If different, flag?

Failure mode activates after first hash fail. Start comparing
data. Use immediate hash checking so we don't need to re-request
pieces etc.


Add counter to BlockList for number of failed hash checks.

Separate handler for delegating to failed chunks.

Perhaps throw out transfers that did not contribute data to the failed
chunk.

Don't be so obsessed about good performance on small swarms with
multiple bad seeders.


event_write on handshake incoming.


Enable throttle on skip?

Change ThrottleList to a vector?


Does ratio take care that it doesn't bork on hashing?


replace PI in handshake with iter?

RequestList::is_interested_in_active does not check if the chunk is
finished. Might need to wait for hash check.

Account for skipped pieces.

rtorrent: PeerList::disconnected(...) itr == range.second.


Strict requirement for open/close on the current state?

Close not working properly; closing an active download also borks.

Check that all options use K/M/G correctly.

Move the catches outside of DownloadList.

When setting upload_rate, make sure max_unchoked is valid.

Move BITFIELD handling? Put another link between handshake and
PeerConnection. Make various buffers swappable/movable.

Don't silently drop connections that fail certain setup calls.

Show currently downloading.

Do we want to return some of the other bitfields?

Tracker seeding counter etc?

connection_list, handshake_manager. Make a template, and all classes
have peer_info() function for accessing?

* Move various timers?
* Request Queue

element_tracker_list use TrackerList.

Would it be possible to do 8-9 ands and bitshifts faster than a for
loop when checking the ceiling?


ConnectionList::erase_remaining(iterator pos) - Need to save the itr?

Make the download_list handlers into rak::function's.

Add messages to the network log for bad settings, or merely ignore them?

Consider adding checks for r/wmem_max if available.
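A minimal sketch of such a check (the helper name is made up; on Linux the values live under /proc/sys/net/core/rmem_max and wmem_max, and the file may be absent on other systems or in containers):

```cpp
#include <cstdio>

// Sketch: read a single numeric kernel tunable such as
// net.core.rmem_max from its /proc file. Returns -1 when the file is
// unavailable or unparsable, so callers can skip the check gracefully.
long read_proc_value(const char* path) {
  FILE* f = std::fopen(path, "r");
  if (f == nullptr)
    return -1;
  long value = -1;
  if (std::fscanf(f, "%ld", &value) != 1)
    value = -1;
  std::fclose(f);
  return value;
}
```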

Read about up_dynaddr

Get rid of globals.h.

Don't ignore lines that end with EOF.

Consider how failed pieces affect interested state.

"Peer requested a piece with invalid index or length/offset." - Enable
the extra debug information and figure this one out.

sprintf std::min usage might be wrong.

Heuristics for choosing the scrape to show.

Udp needs to go through socketmanager.

TrackerUdp::parse_url should happen only once.

Add a function in core::Download to update various resume data.

read request checks choke in two places and adds to write poll.

Test incoming.

rak/functional, some of those do copy-by-value...

echo | openssl s_client -connect some.server.somewhere:443 | grep -A 100 'BEGIN CERTIFICATE' | grep -B 100 'END CERTIFICATE' >> /usr/share/curl/curl-ca-bundle.crt


***

Filter based on SocketAddress and TrackerInfo? It should be a single
call for both. Modify PeerInfo and make it "persistent"? Would need to
make it lightweight.

Or perhaps outgoing connections use the PeerInfo they make during the
initialization of the handshake.

But it seems it needs to be two-stepped anyway. First to filter purely
on address, second to change settings.


Make a class that holds 'static' information about a download; the
pointer will also be the unique identifier for that download. This of
course assumes that every user of that pointer as an id stops using it
once the download is erased.

Check incoming connections.

Move the activity checks out of DownloadManager::find_info.

***

Option for saving session torrents. Allow it to save for all, open,
active or other criteria.


Show seeders?

Make a display update before starting torrents.

Bus error in display.

Check if we properly clear failed downloads.

Socks4 patch, look at it; reply.

Check lag issue.

Consider session directory with empty path. Do we disable when set, etc?

Option to create session and watch directories.

Make selecting the next tracker upon fail explicit.

Proper lazy signal handling.



Note about:

* Scheduling rate change.

- Changing priority +/-, man and user guide.

- Date time in schedule.

- Safe to use untied_remove.

- ^O and ^P.



Allow finished torrents to be moved.

Cleanup torrent::Download::set_root_dir

Caught exception: DelegatorPiece dtor called on an object that still has reservees

Add a boolean Variable.

<hnsk> when there's a torrent hashing and another torrent finishes,
would it be possible for it to say waiting for hash check instead of
just inactive

Move set_root_directory cleanup into libtorrent. Allow empty string for "./"?

Consider setting POSIX or C locales.

Go over all the options/flags, make sure we use correct variablemap set.

Add VariableSlotValueValue.

Class that restricts input? A yes/no thingie? Would save on space though.

Better error messages on bad config options?

Consider where Directory's dotHide parameter is needed.

Don't include .. and . when doing wildcard. Need to ensure ".." only gets matched when ".." is used.

Fix up the splitter.

Look at PCLeech::read_have_chunk FIXME.

Clean up the client-side handling of listening port open/reopen/close.

Don't allow multiple connections from the same IP to the same
torrent. Also do something about incoming connections. Only let them
open N handshakes.

Don't use std::make_heap in rak::priority_queue, test that this is
safe.

Replace PCB::m_tryRequest with is_up_interested?

Make the DownloadFactory's copyable/modifiable and such so they can
be used as templates. Consider whether they should depend on
variables etc.

Ponder listen/ip/bind naming, listen should perhaps be bind?


===

Look into replacing SocketAddress with some kind of thing using
getaddrinfo.

Display leechers/seeders.

Make sure storage error is correctly displayed, rather than being
hidden.

Different min peers for seeding and leeching modes?


=== For the next API change ===

Rename hash_resume_save/load, it should be generic.

Split the resume interface.

Add an API for checking the existence of files before we open stuff.
BTW, should we perhaps do lazy creation of files not currently
present? We can add stuff to Entry to check this. Shouldn't create
files unless they are marked for download? But that might bork when we
need to map those regions.

===


=== Hash Resume save ===

Would need to make sure all chunks have been flushed.

===


<zzorn> if you think .file could be seriously abused, then feel free
to block (or perhaps warn??) about it

Move task scheduler to rak.

Testcase for mmap bug? Also look into dns lookup cache.

Set an upper limit on the number of send requests a peer can queue.


=== Printing to display ===

Clean this up, especially file list.


Make sure to verify that the single file torrent gets a valid path;
also make sure no file can start with "." or "..".

Look into iconv.

Add checks to ensure requested pieces are valid. Length etc. Also
fiddle with max queue length. This all needs to be cleaned up.

Fiddle pipe size.


== Delegator refactoring ==

Consider ways of avoiding the use of Delegator::slotChunkSize.

Allow some client requests to span more than (1<<14) bytes.

====


Consider moving size checking outside of File::get_chunk.

Consider adding a check to make sure info_hash matches the session
torrent's file name?

Do chokes on long-time-unchoked peers when seeding, to spread the upload more.

***

Make sure that when we stop a torrent and disconnect peers, those
disconnects don't cause unchokes of soon-to-be-disconnected peers.
Create a function that chokes all and doesn't allow new unchokes.

Move download initialization stuff into Manager.

Rename most things to remote/local or something; up/down is too
confusing. At least add it to some of the function names.

Would be nice to refactor requester, it should own the bitfield in some way?

delegator's priorities are reset on DownloadMain::open? Make the
priority thing work right; don't use a separate function for updating
them.

Consider asking for read-ahead for fast peers.

Unaligned 58.6.0.92     0.0

Do the move of session torrents so we don't end up with incomplete
torrent files in case of crashes.

Make it possible to disable certain files, so they don't cause an
error when they don't give rw permission or can't be resized?

Clearer display of which torrents are stopped; make it into a separate
column.

Re-add slot for removing incoming connections from available list? If
so, remove the check in connect_peers.

Move endgame detection to delegator?

Rename TrackerControl?

Remove get_ for a certain class of accessors, and also make them
pointers to put emphasis on the fact that they aren't logically
behaving like return-by-value types.

Add an intersection version of HandshakeManager::has_address()?

Clean up the header includes around the codebase, need to reduce the
amount of memory required to compile libtorrent.

Send mail to OpenBSD to see how they handle madvise.

Use a separate function for handling epoll_ctl calls? With event and
op as args?

Consider renaming Poll::open/close.

Improve the settings code; it should read all settings before
initialization. Make a separate settings class that holds all the
settings, which the user can view during runtime.

Look over the max open sockets settings etc.; they should be
consistent. One should have a choice between using a setting that
changes sysconf(_SC_OPEN_MAX), or getting the max value from this
variable.
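One way to keep the two in sync (a sketch, assuming a POSIX system; `max_open_files` is a made-up helper name) is to read the soft RLIMIT_NOFILE limit and fall back to sysconf:

```cpp
#include <sys/resource.h>
#include <unistd.h>

// Sketch: effective max open descriptors, taken from the soft
// RLIMIT_NOFILE limit when available, falling back to
// sysconf(_SC_OPEN_MAX) otherwise.
long max_open_files() {
  struct rlimit rl;
  if (getrlimit(RLIMIT_NOFILE, &rl) == 0 && rl.rlim_cur != RLIM_INFINITY)
    return (long)rl.rlim_cur;
  return sysconf(_SC_OPEN_MAX);
}
```

Raising the limit would go through setrlimit on the same struct before sockets are opened.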

Add a setting for controlling whether allocate is called at torrent
creation, chunk creation or not at all.

XFS reserve should check for the error return code; if it fails due to
lack of disk space it should return false, otherwise true.

Clean up the settings code by moving the validation to the apply'er.

Use typedef for port?

Add filters for ip address and peer id. These must be separate, as the
ip check happens before the handshake, but the peer id check is done
after the handshake or before connecting.

Fix hash checking slowness on overworked boxens. Possibly mark the
mmap'ed area as sequential. Add options to the man page.
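Marking the mmap'ed area as sequential could look roughly like this (a sketch; `advise_sequential` is a hypothetical wrapper around madvise(2), which on most systems makes read-ahead more aggressive for sequential access):

```cpp
#include <sys/mman.h>

// Sketch: hint the kernel that a mapped chunk will be read
// sequentially (e.g. during a hash check) so read-ahead is more
// aggressive. addr/length must describe a currently valid mapping.
bool advise_sequential(void* addr, size_t length) {
  return madvise(addr, length, MADV_SEQUENTIAL) == 0;
}
```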

Expand ~/ in input.
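A minimal sketch of the expansion (hypothetical helper; it only handles a leading "~/" via $HOME, not ~user forms):

```cpp
#include <cstdlib>
#include <string>

// Sketch: expand a leading "~/" using $HOME. Other forms ("~user",
// bare "~") and paths without a tilde are returned unchanged.
std::string expand_tilde(const std::string& path) {
  if (path.size() < 2 || path[0] != '~' || path[1] != '/')
    return path;
  const char* home = std::getenv("HOME");
  if (home == nullptr)
    return path;
  return std::string(home) + path.substr(1);
}
```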

Rewrite the chunk handler; use a vector that msyncs contiguous chunks
when it feels like it. Use a task to check periodically, or when
receiving chunks. Try to make sure we do sequential download of
chunks, instead of completely random.

Add a slot in core::DownloadList that makes it safe to delete
downloads at all times.

Use a separate function for checking the hash status when doing
download starts.


*** STUFF I MIGHT DO SOME DAY ***

After 25 seconds, choke a peer if no requests have been received?

Add a snub factor. The more you upload without getting back, the
higher it goes. Add a start buffer.

Add config for how often we unchoke unknown peers vs good uploaders

std::isalpha in escape string

torrent::Http::call_cleanup to delete the factory created object?

If we get conflicting prot flags in get_chunk, we might go into cycle
of re-allocating. Do a union instead.


DELEGATOR STUFF

Proper canceling of pieces in sendHave, unless only one can download a
piece.

Make set endgame take priorities into account.

Don't be so aggressive at selecting stalled pieces.


AFTER API REDESIGN:

Consider ways of optimizing bitfield memory usage. A bitfield with all
set shouldn't change ever... And do a count on number of set bits?
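The count of set bits could be cached and updated incrementally, or recomputed with the classic SWAR popcount, which uses a fixed sequence of shifts, ands and adds instead of a per-bit loop; a sketch:

```cpp
#include <cstdint>

// SWAR popcount sketch: counts set bits in a 32-bit word with a fixed
// number of operations. Summing this over a bitfield's words gives
// the total set-bit count.
inline unsigned popcount32(uint32_t v) {
  v = v - ((v >> 1) & 0x55555555u);                 // 2-bit sums
  v = (v & 0x33333333u) + ((v >> 2) & 0x33333333u); // 4-bit sums
  v = (v + (v >> 4)) & 0x0F0F0F0Fu;                 // 8-bit sums
  return (v * 0x01010101u) >> 24;                   // add the bytes
}
```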


OPTIMIZING

Does it make sense to have *.bt_part or similar suffixes on partially
downloaded files? This would be incompatible with other clients, but
make it configurable? This is fully client side, though we'd need to
support a way to move files.

Make sure the handshake gets an empty ("") id if the download does not
want connections, or use some other filter system.


DOCUMENTATION

As far as documentation is concerned: a step-by-step example that
shows all the torrent::init calls and sets up a download, plus doxygen
comments in the header files, would be great.


TRACKER STUFF

Make sure adding trackers doesn't invalidate current requests,
disallow for open torrents?


Move mkdir stuff out of Path?

Lock file in session directory.
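A sketch of the session lock file, assuming POSIX advisory record locks (the function name is made up); the lock is released automatically when the process exits, so stale locks from crashed instances don't block a restart:

```cpp
#include <fcntl.h>
#include <unistd.h>

// Sketch: take an advisory write lock on a lock file inside the
// session directory. Returns the open fd on success, -1 when another
// process already holds the lock (or the file can't be opened).
int acquire_session_lock(const char* path) {
  int fd = open(path, O_RDWR | O_CREAT, 0644);
  if (fd < 0)
    return -1;
  struct flock fl = {};
  fl.l_type = F_WRLCK;    // exclusive lock
  fl.l_whence = SEEK_SET; // l_start/l_len == 0: lock the whole file
  if (fcntl(fd, F_SETLK, &fl) != 0) {
    close(fd);            // held by another process
    return -1;
  }
  return fd;              // keep open for the lifetime of the session
}
```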
