##
## GNU Pth - The GNU Portable Threads
## Copyright (c) 1999-2004 Ralf S. Engelschall
##
## This file is part of GNU Pth, a non-preemptive thread scheduling
## library which can be found at http://www.gnu.org/software/pth/.
##
## This library is free software; you can redistribute it and/or
## modify it under the terms of the GNU Lesser General Public
## License as published by the Free Software Foundation; either
## version 2.1 of the License, or (at your option) any later version.
##
## This library is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## Lesser General Public License for more details.
##
## You should have received a copy of the GNU Lesser General Public
## License along with this library; if not, write to the Free Software
## Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
## USA, or contact Ralf S. Engelschall <rse@engelschall.com>.
##
## pth.pod: Pth manual page
##
# ``Real programmers don't document.
# Documentation is for wimps who can't
# read the listings of the object deck.''
=pod
=head1 NAME
B<pth> - GNU Portable Threads
=head1 VERSION
GNU Pth PTH_VERSION_STR
=head1 SYNOPSIS
=over 4
=item B<Global Library Management>
pth_init,
pth_kill,
pth_ctrl,
pth_version.
=item B<Thread Attribute Handling>
pth_attr_of,
pth_attr_new,
pth_attr_init,
pth_attr_set,
pth_attr_get,
pth_attr_destroy.
=item B<Thread Control>
pth_spawn,
pth_once,
pth_self,
pth_suspend,
pth_resume,
pth_yield,
pth_nap,
pth_wait,
pth_cancel,
pth_abort,
pth_raise,
pth_join,
pth_exit.
=item B<Utilities>
pth_fdmode,
pth_time,
pth_timeout,
pth_sfiodisc.
=item B<Cancellation Management>
pth_cancel_point,
pth_cancel_state.
=item B<Event Handling>
pth_event,
pth_event_typeof,
pth_event_extract,
pth_event_concat,
pth_event_isolate,
pth_event_walk,
pth_event_status,
pth_event_free.
=item B<Key-Based Storage>
pth_key_create,
pth_key_delete,
pth_key_setdata,
pth_key_getdata.
=item B<Message Port Communication>
pth_msgport_create,
pth_msgport_destroy,
pth_msgport_find,
pth_msgport_pending,
pth_msgport_put,
pth_msgport_get,
pth_msgport_reply.
=item B<Thread Cleanups>
pth_cleanup_push,
pth_cleanup_pop.
=item B<Process Forking>
pth_atfork_push,
pth_atfork_pop,
pth_fork.
=item B<Synchronization>
pth_mutex_init,
pth_mutex_acquire,
pth_mutex_release,
pth_rwlock_init,
pth_rwlock_acquire,
pth_rwlock_release,
pth_cond_init,
pth_cond_await,
pth_cond_notify,
pth_barrier_init,
pth_barrier_reach.
=item B<User-Space Context>
pth_uctx_create,
pth_uctx_make,
pth_uctx_switch,
pth_uctx_destroy.
=item B<Generalized POSIX Replacement API>
pth_sigwait_ev,
pth_accept_ev,
pth_connect_ev,
pth_select_ev,
pth_poll_ev,
pth_read_ev,
pth_readv_ev,
pth_write_ev,
pth_writev_ev,
pth_recv_ev,
pth_recvfrom_ev,
pth_send_ev,
pth_sendto_ev.
=item B<Standard POSIX Replacement API>
pth_nanosleep,
pth_usleep,
pth_sleep,
pth_waitpid,
pth_system,
pth_sigmask,
pth_sigwait,
pth_accept,
pth_connect,
pth_select,
pth_pselect,
pth_poll,
pth_read,
pth_readv,
pth_write,
pth_writev,
pth_pread,
pth_pwrite,
pth_recv,
pth_recvfrom,
pth_send,
pth_sendto.
=back
=head1 DESCRIPTION
  ____  _   _
 |  _ \| |_| |__
 | |_) | __| '_ \         ``Only those who attempt
 |  __/| |_| | | |          the absurd can achieve
 |_|    \__|_| |_|          the impossible.''
B<Pth> is a very portable POSIX/ANSI-C based library for Unix platforms which
provides non-preemptive priority-based scheduling for multiple threads of
execution (aka `multithreading') inside event-driven applications. All threads
run in the same address space of the application process, but each thread has
its own individual program counter, run-time stack, signal mask and C<errno>
variable.
The thread scheduling itself is done in a cooperative way, i.e., the threads
are managed and dispatched by a priority- and event-driven non-preemptive
scheduler. The intention is that this way both better portability and run-time
performance are achieved than with preemptive scheduling. The event facility
allows threads to wait until various types of internal and external events
occur, including pending I/O on file descriptors, asynchronous signals,
elapsed timers, pending I/O on message ports, thread and process termination,
and even results of customized callback functions.
B<Pth> also provides an optional emulation API for POSIX.1c threads
(`Pthreads') which can be used for backward compatibility to existing
multithreaded applications. See B<Pth>'s pthread(3) manual page for
details.
=head2 Threading Background
When programming event-driven applications, usually servers, lots of
regular jobs and one-shot requests have to be processed in parallel.
To efficiently simulate this parallel processing on uniprocessor
machines, we use `multitasking' -- that is, we have the application
ask the operating system to spawn multiple instances of itself. On
Unix, typically the kernel implements multitasking in a preemptive and
priority-based way through heavy-weight processes spawned with fork(2).
These processes usually do I<not> share a common address space. Instead
they are clearly separated from each other, and are created by directly
cloning a process address space (although modern kernels use memory
segment mapping and copy-on-write semantics to avoid unnecessary copying
of physical memory).
The drawbacks are obvious: Sharing data between the processes is
complicated, and can usually only be done efficiently through shared
memory (which itself is not very portable). Synchronization is
complicated because of the preemptive nature of the Unix scheduler
(one has to use I<atomic> locks, etc). The machine's resources can be
exhausted very quickly when the server application has to serve too many
long-running requests (heavy-weight processes cost memory). And when
each request spawns a sub-process to handle it, the server performance
and responsiveness are horrible (heavy-weight processes cost time to
spawn). Finally, the server application doesn't scale very well with the
load because of these resource problems. In practice, lots of tricks
are usually used to overcome these problems - ranging from pre-forked
sub-process pools to semi-serialized processing, etc.
One of the most elegant ways to solve these resource- and data-sharing
problems is to have multiple I<light-weight> threads of execution
inside a single (heavy-weight) process, i.e., to use I<multithreading>.
Those I<threads> usually improve responsiveness and performance of the
application, often improve and simplify the internal program structure,
and most importantly, require less system resources than heavy-weight
processes. Threads are neither the optimal run-time facility for all
types of applications, nor can all applications benefit from them. But
at least event-driven server applications usually benefit greatly from
using threads.
=head2 The World of Threading
Even though lots of documents exist which describe and define the world
of threading, to understand B<Pth>, you need only basic knowledge about
threading. The following definitions of thread-related terms should at
least help you understand thread programming enough to allow you to use
B<Pth>.
=over 2
=item B<o> B<process> vs. B<thread>
A process on Unix systems consists of at least the following fundamental
ingredients: I<virtual memory>, I<program code>, I<program counter>,
I<heap memory>, I<stack memory>, I<stack pointer>, I<file descriptors>
and I<signal table>. On every process switch, the kernel
saves and restores these ingredients for the individual processes. On
the other hand, a thread consists of only a private program counter,
stack memory, stack pointer and signal table. All other ingredients, in
particular the virtual memory, it shares with the other threads of the
same process.
=item B<o> B<kernel-space> vs. B<user-space> threading
Threads on a Unix platform traditionally can be implemented either
inside kernel-space or user-space. When threads are implemented by the
kernel, the thread context switches are performed by the kernel without
the application's knowledge. Similarly, when threads are implemented in
user-space, the thread context switches are performed by an application
library, without the kernel's knowledge. There are also hybrid threading
approaches where, typically, a user-space library binds one or more
user-space threads to one or more kernel-space threads (there usually
called I<light-weight processes>, or LWPs for short).
User-space threads are usually more portable and can perform faster
and cheaper context switches (for instance via swapcontext(2) or
setjmp(3)/longjmp(3)) than kernel-based threads. On the other hand,
kernel-space threads can take advantage of multiprocessor machines and
don't have any inherent I/O blocking problems. Kernel-space threads are
usually scheduled in a preemptive way side-by-side with the underlying
processes. User-space threads on the other hand use either preemptive or
non-preemptive scheduling.
=item B<o> B<preemptive> vs. B<non-preemptive> thread scheduling
In preemptive scheduling, the scheduler lets a thread execute until a
blocking situation occurs (usually a function call which would block)
or the assigned timeslice elapses. Then it takes control away from the
thread without giving the thread a chance to object. This is usually
realized by interrupting the thread through a hardware interrupt
signal (for kernel-space threads) or a software interrupt signal (for
user-space threads), like C<SIGALRM> or C<SIGVTALRM>. In non-preemptive
scheduling, once a thread has received control from the scheduler it keeps
it until either a blocking situation occurs (again a function call which
would block and instead switches back to the scheduler) or the thread
explicitly yields control back to the scheduler in a cooperative way.
=item B<o> B<concurrency> vs. B<parallelism>
Concurrency exists when at least two threads are I<in progress> at the
same time. Parallelism arises when at least two threads are I<executing>
simultaneously. Real parallelism can be only achieved on multiprocessor
machines, of course. But one also usually speaks of parallelism or
I<high concurrency> in the context of preemptive thread scheduling
and of I<low concurrency> in the context of non-preemptive thread
scheduling.
=item B<o> B<responsiveness>
The responsiveness of a system can be described by the user-visible
delay until the system responds to an external request. When this delay
is small enough and the user doesn't recognize a noticeable delay,
the responsiveness of the system is considered good. When the user
recognizes or is even annoyed by the delay, the responsiveness of the
system is considered bad.
=item B<o> B<reentrant>, B<thread-safe> and B<asynchronous-safe> functions
A reentrant function is one that behaves correctly if it is called
simultaneously by several threads and then also executes simultaneously.
Functions that access global state, such as memory or files, of course,
need to be carefully designed in order to be reentrant. Two traditional
approaches to solve these problems are caller-supplied states and
thread-specific data.
Thread-safety is the avoidance of I<race conditions>, i.e., situations
in which data is set to either a correct or an incorrect value depending
upon the (unpredictable) order in which multiple threads access and
modify the data. So a function is thread-safe when it still behaves
semantically correctly when called simultaneously by several threads (it
is not required that the functions also execute simultaneously). The
traditional approach to achieve thread-safety is to wrap a function body
with an internal mutual exclusion lock (aka `mutex'). As you should
recognize, reentrant is a stronger attribute than thread-safe, because
it is harder to achieve and especially results in no run-time contention
between threads. So, a reentrant function is always thread-safe, but not
vice versa (see the sketch below for a classical example).
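A classical illustration from the C library (not B<Pth>-specific) is
strtok(3) versus strtok_r(3): the former keeps its parsing position in
a hidden static variable and is therefore neither reentrant nor
thread-safe, while the latter uses the caller-supplied state approach
mentioned above:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[] = "one two three";
        char *last, *tok;

        /* reentrant variant: the caller supplies the parse state
           ("last"), so several simultaneous parses cannot interfere */
        for (tok = strtok_r(buf, " ", &last); tok != NULL;
             tok = strtok_r(NULL, " ", &last))
            printf("token: %s\n", tok);
        return 0;
    }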
Additionally there is a related attribute for functions named
asynchronous-safe, which comes into play in conjunction with signal
handlers. This is closely related to the problem of reentrant functions. An
asynchronous-safe function is one that can be called safely and without
side-effects from within a signal handler context. Usually very few
functions are of this type, because an application is very restricted in
what it can perform from within a signal handler (especially in what system
functions it is allowed to call). The main reason is that only a
few system functions are officially declared by POSIX as guaranteed to
be asynchronous-safe. Asynchronous-safe functions usually have to be
reentrant as well.
=back
=head2 User-Space Threads
User-space threads can be implemented in various ways. The two
traditional approaches are:
=over 3
=item B<1.>
B<Matrix-based explicit dispatching between small units of execution:>
Here the global procedures of the application are split into small
execution units (each is required to not run for more than a few
milliseconds) and those units are implemented by separate functions.
Then a global matrix is defined which describes the execution (and
perhaps even dependency) order of these functions. The main server
procedure then just dispatches between these units by calling one
function after the other, as controlled by this matrix. The threads are
created by more than one jump-trail through this matrix, and by switching
between these jump-trails as the corresponding events occur.
This approach gives the best possible performance, because one can
fine-tune the threads of execution by adjusting the matrix, and the
scheduling is done explicitly by the application itself. It is also very
portable, because the matrix is just an ordinary data structure, and
functions are a standard feature of ANSI C.
The disadvantage of this approach is that it is complicated to write
large applications this way, because in such applications
one quickly gets hundreds(!) of execution units and the control flow
inside such an application is very hard to understand (because it is
interrupted by function borders and one always has to remember the
global dispatching matrix to follow it). Additionally, all threads
operate on the same execution stack. Although this saves memory, it is
often nasty, because one cannot switch between threads in the middle of
a function. Thus the scheduling borders are the function borders.
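As a toy illustration (a sketch, not taken from any real application),
such a dispatching matrix can be as simple as an array of function
pointers through which a main loop interleaves two jump-trails:

    #include <stdio.h>

    /* each execution unit is a small function which must return quickly */
    typedef void (*unit_t)(void);

    static void a1(void) { printf("A: step 1\n"); }
    static void a2(void) { printf("A: step 2\n"); }
    static void b1(void) { printf("B: step 1\n"); }
    static void b2(void) { printf("B: step 2\n"); }

    /* the dispatching matrix: one jump-trail per thread of control */
    static unit_t matrix[2][2] = { { a1, a2 }, { b1, b2 } };

    int main(void)
    {
        int step, t;

        /* explicit scheduling: interleave the two jump-trails */
        for (step = 0; step < 2; step++)
            for (t = 0; t < 2; t++)
                matrix[t][step]();
        return 0;
    }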
=item B<2.>
B<Context-based implicit scheduling between threads of execution:>
Here the idea is that one programs the application as with forked
processes, i.e., one spawns a thread of execution and this runs from
beginning to end without an interrupted control flow. But the control
flow can still be interrupted - even in the middle of a function.
Actually in a preemptive way, similar to what the kernel does for the
heavy-weight processes, i.e., every few milliseconds the user-space
scheduler switches between the threads of execution. But the thread
itself doesn't recognize this and usually (except for synchronization
issues) doesn't have to care about this.
The advantage of this approach is that it's very easy to program,
because the control flow and context of a thread directly follows
a procedure without forced interrupts through function borders.
Additionally, the programming is very similar to a traditional and well
understood fork(2) based approach.
The disadvantage is that although the general performance is increased
compared to approaches based on heavy-weight processes, it is decreased
compared to the matrix approach above, because the implicit preemptive
scheduling usually performs a lot more context switches (every user-space
context switch costs some overhead, even though it is a lot cheaper than a
kernel-level context switch) than explicit cooperative/non-preemptive
scheduling does.
Finally, there is no really portable POSIX/ANSI-C based way to implement
user-space preemptive threading. Either the platform already has threads,
or one has to hope that some semi-portable package exists for it. And
even those semi-portable packages usually have to deal with assembler
code and other nasty internals and are not easy to port to forthcoming
platforms.
=back
So, in short: the matrix-dispatching approach is portable and fast, but
nasty to program. The thread scheduling approach is easy to program,
but suffers from synchronization and portability problems caused by its
preemptive nature.
=head2 The Compromise of Pth
But why not combine the good aspects of both approaches while avoiding
their bad aspects? That's the goal of B<Pth>. B<Pth> implements
easy-to-program threads of execution, but avoids the problems of
preemptive scheduling by using non-preemptive scheduling instead.
This sounds like, and is, a useful approach. Nevertheless, one has to
keep the implications of non-preemptive thread scheduling in mind when
working with B<Pth>. The following list summarizes a few essential
points:
=over 2
=item B<o>
B<Pth provides maximum portability, but NOT the fanciest features>.
This is because it uses a nifty and portable POSIX/ANSI-C approach for
thread creation (and this way doesn't require any platform dependent
assembler hacks) and schedules the threads in a non-preemptive way (which
doesn't require unportable facilities like C<SIGVTALRM>). On the other
hand, this way not all fancy threading features can be implemented.
Nevertheless the available facilities are enough to provide a robust and
full-featured threading system.
=item B<o>
B<Pth increases the responsiveness and concurrency of an event-driven
application, but NOT the concurrency of number-crunching applications>.
The reason is the non-preemptive scheduling. Number-crunching
applications usually require preemptive scheduling to achieve
concurrency because of their long CPU bursts. For them, non-preemptive
scheduling (even together with explicit yielding) provides only the old
concept of `coroutines'. On the other hand, event-driven applications
benefit greatly from non-preemptive scheduling. They have only short
CPU bursts and lots of events to wait on, and this way run faster under
non-preemptive scheduling because no unnecessary context switching
occurs, as is the case with preemptive scheduling. That's why B<Pth>
is mainly intended for server type applications, although there is no
technical restriction.
=item B<o>
B<Pth requires thread-safe functions, but NOT reentrant functions>.
This nice fact exists again because of the nature of non-preemptive
scheduling, where a function isn't interrupted and this way cannot be
reentered before it has returned. This is a great portability benefit,
because thread-safety can be achieved more easily than reentrance.
Especially this means that under B<Pth> more existing third-party
libraries can be used without side-effects than is the case for other
threading systems.
=item B<o>
B<Pth doesn't require any kernel support, but can NOT benefit from
multiprocessor machines>.
This means that B<Pth> runs on almost all Unix kernels, because the
kernel does not need to be aware of the B<Pth> threads (they
are implemented entirely in user-space). On the other hand, it cannot
benefit from the existence of multiprocessors, because for this, kernel
support would be needed. In practice, this is no problem, because
multiprocessor systems are rare, and portability is almost more
important than highest concurrency.
=back
=head2 The life cycle of a thread
To understand the B<Pth> Application Programming Interface (API), it
helps to first understand the life cycle of a thread in the B<Pth>
threading system. It can be illustrated with the following directed
graph:
               NEW
                |
                V
        +---> READY ---+
        |       ^      |
        |       |      V
     WAITING <--+-- RUNNING
                       |
        :              V
    SUSPENDED         DEAD
When a new thread is created, it is moved into the B<NEW> queue of the
scheduler. On the next dispatching for this thread, the scheduler picks
it up from there and moves it to the B<READY> queue. This is a queue
containing all threads which want to perform a CPU burst. There they are
queued in priority order. On each dispatching step, the scheduler always
removes the thread with the highest priority only. It then increases the
priority of all remaining threads by 1, to prevent them from `starving'.
The thread which was removed from the B<READY> queue is the new
B<RUNNING> thread (there is always just one B<RUNNING> thread, of
course). The B<RUNNING> thread is assigned execution control. After
this thread yields execution (either explicitly by yielding execution
or implicitly by calling a function which would block) there are three
possibilities: Either it has terminated, then it is moved to the B<DEAD>
queue, or it has events on which it wants to wait, then it is moved into
the B<WAITING> queue. Else it is assumed it wants to perform more CPU
bursts and immediately enters the B<READY> queue again.
Before the next thread is taken out of the B<READY> queue, the
B<WAITING> queue is checked for pending events. If one or more events
occurred, the threads that are waiting on them are immediately moved to
the B<READY> queue.
The purpose of the B<NEW> queue has to do with the fact that in B<Pth>
a thread never directly switches to another thread. A thread always
yields execution to the scheduler and the scheduler dispatches to the
next thread. So a freshly spawned thread has to be kept somewhere until
the scheduler gets a chance to pick it up for scheduling. That is
what the B<NEW> queue is for.
The purpose of the B<DEAD> queue is to support thread joining. When a
thread is marked to be unjoinable, it is directly kicked out of the
system after it terminated. But when it is joinable, it enters the
B<DEAD> queue. There it remains until another thread joins it.
Finally, there is a special separated queue named B<SUSPENDED>, to where
threads can be manually moved from the B<NEW>, B<READY> or B<WAITING>
queues by the application. The purpose of this special queue is to
temporarily absorb suspended threads until they are again resumed by
the application. Suspended threads do not cost scheduling or event
handling resources, because they are temporarily completely out of the
scheduler's scope. If a thread is resumed, it is moved back to the queue
from where it originally came and this way again enters the scheduler's
scope.
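In terms of the API discussed below, these state transitions can be
triggered like this (a sketch; worker() is a hypothetical thread entry
function):

    pth_t tid;

    tid = pth_spawn(PTH_ATTR_DEFAULT, worker, NULL); /* enters NEW    */
    pth_yield(NULL);     /* let the scheduler run: NEW -> READY, etc. */
    pth_suspend(tid);    /* move it into the SUSPENDED queue          */
    pth_resume(tid);     /* move it back into its origin queue        */
    pth_join(tid, NULL); /* wait until it reaches the DEAD queue      */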
=head1 APPLICATION PROGRAMMING INTERFACE (API)
In the following, the B<Pth> I<Application Programming Interface> (API)
is discussed in detail. With the knowledge given above, it should now
be easy to understand how to program threads with this API. In good
Unix tradition, B<Pth> functions use special return values (C<NULL>
in pointer context, C<FALSE> in boolean context and C<-1> in integer
context) to indicate an error condition and set (or pass through) the
C<errno> system variable to pass more details about the error to the
caller.
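For example, an error check for thread creation typically looks like
this (a sketch; worker() is a hypothetical thread entry function):

    #include <errno.h>
    #include <string.h>

    pth_t tid;

    if ((tid = pth_spawn(PTH_ATTR_DEFAULT, worker, NULL)) == NULL)
        fprintf(stderr, "pth_spawn: %s\n", strerror(errno));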
=head2 Global Library Management
The following functions act on the library as a whole. They are used to
initialize and shut down the scheduler and to fetch information from it.
=over 4
=item int B<pth_init>(void);
This initializes the B<Pth> library. It has to be the first B<Pth> API
function call in an application, and is mandatory. It's usually done at
the beginning of the main() function of the application. This implicitly
spawns the internal scheduler thread and transforms the single execution
unit of the current process into a thread (the `main' thread). It
returns C<TRUE> on success and C<FALSE> on error.
=item int B<pth_kill>(void);
This kills the B<Pth> library. It should be the last B<Pth> API function call
in an application, but is not really required. It's usually done at the end of
the main function of the application. At least, it has to be called from within
the main thread. It implicitly kills all threads and transforms the calling
thread back into the single execution unit of the underlying process. The
usual way to terminate a B<Pth> application is either a simple
`C<pth_exit(0);>' in the main thread (which waits for all other threads to
terminate, kills the threading system and then terminates the process) or a
`C<pth_kill(); exit(0);>' (which immediately kills the threading system and
terminates the process). The pth_kill() function returns immediately with a
return code of C<FALSE> if it is not called from within the main thread. Else
it kills the threading system and returns C<TRUE>.
=item long B<pth_ctrl>(unsigned long I<query>, ...);
This is a generalized query/control function for the B<Pth> library. The
argument I<query> is a bitmask formed out of one or more C<PTH_CTRL_>I<XXXX>
queries. Currently the following queries are supported:
=over 4
=item C<PTH_CTRL_GETTHREADS>
This returns the total number of threads currently in existence. This query
actually is formed out of the combination of queries for threads in a
particular state, i.e., the C<PTH_CTRL_GETTHREADS> query is equal to the
OR-combination of all the following specialized queries:
C<PTH_CTRL_GETTHREADS_NEW> for the number of threads in the
new queue (threads created via pth_spawn(3) but still not
scheduled once), C<PTH_CTRL_GETTHREADS_READY> for the number of
threads in the ready queue (threads who want to do CPU bursts),
C<PTH_CTRL_GETTHREADS_RUNNING> for the number of running threads
(always just one thread!), C<PTH_CTRL_GETTHREADS_WAITING> for
the number of threads in the waiting queue (threads waiting for
events), C<PTH_CTRL_GETTHREADS_SUSPENDED> for the number of
threads in the suspended queue (threads waiting to be resumed) and
C<PTH_CTRL_GETTHREADS_DEAD> for the number of threads in the dead queue
(terminated threads waiting for a join).
=item C<PTH_CTRL_GETAVLOAD>
This requires a second argument of type `C<float *>' (pointer to a floating
point variable). It stores a floating point value describing the exponentially
averaged load of the scheduler in this variable. The load is a function of
the number of threads in the ready queue of the scheduler's dispatching unit.
So a load around 1.0 means there is only one ready thread (the standard
situation when the application has no high load). A higher load value means
there are more ready threads that want to do CPU bursts. The average load
value is updated once per second only. The return value for this query is
always 0.
=item C<PTH_CTRL_GETPRIO>
This requires a second argument of type `C<pth_t>' which identifies a
thread. It returns the priority (ranging from C<PTH_PRIO_MIN> to
C<PTH_PRIO_MAX>) of the given thread.
=item C<PTH_CTRL_GETNAME>
This requires a second argument of type `C<pth_t>' which identifies a
thread. It returns the name of the given thread, i.e., the return value of
pth_ctrl(3) should be cast to a `C<char *>'.
=item C<PTH_CTRL_DUMPSTATE>
This requires a second argument of type `C<FILE *>' to which a summary
of the internal B<Pth> library state is written. The main information
which is currently written out is the current state of the thread pool.
=item C<PTH_CTRL_FAVOURNEW>
This requires a second argument of type `C<int>' which specifies whether
the B<Pth> scheduler favours new threads on startup, i.e., whether
they are moved from the new queue to the top (argument is C<1>) or
middle (argument is C<0>) of the ready queue. The default is to
favour new threads to make sure they do not starve already at startup,
although this slightly violates the strict priority based scheduling.
=back
The function returns C<-1> on error.
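For example, the scheduler statistics can be queried like this (a
sketch):

    float load;
    long nthreads;

    pth_ctrl(PTH_CTRL_GETAVLOAD, &load);      /* average scheduler load  */
    nthreads = pth_ctrl(PTH_CTRL_GETTHREADS); /* total number of threads */
    fprintf(stderr, "load=%.2f, threads=%ld\n", load, nthreads);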
=item long B<pth_version>(void);
This function returns a hex-value `0xI<V>I<RR>I<T>I<LL>' which describes the
current B<Pth> library version. I<V> is the version, I<RR> the revisions,
I<LL> the level and I<T> the type of the level (alphalevel=0, betalevel=1,
patchlevel=2, etc). For instance B<Pth> version 1.0b1 is encoded as 0x100101.
The reason for this unusual mapping is that this way the version number is
steadily I<increasing>. The same value is also available at compile time as
C<PTH_VERSION>.
=back
=head2 Thread Attribute Handling
Attribute objects are used in B<Pth> for two things: First, stand-alone/unbound
attribute objects are used to store attributes for to-be-spawned threads.
Bound attribute objects are used to modify attributes of already existing
threads. The following attribute fields exist in attribute objects:
=over 4
=item C<PTH_ATTR_PRIO> (read-write) [C<int>]
Thread priority between C<PTH_PRIO_MIN> and C<PTH_PRIO_MAX>.
The default is C<PTH_PRIO_STD>.
=item C<PTH_ATTR_NAME> (read-write) [C<char *>]
Name of thread (up to 40 characters are stored only), mainly for debugging
purposes.
=item C<PTH_ATTR_DISPATCHES> (read-write) [C<int>]
In bound attribute objects, this field is incremented every time the
context is switched to the associated thread.
=item C<PTH_ATTR_JOINABLE> (read-write) [C<int>]
The thread detachment type, C<TRUE> indicates a joinable thread,
C<FALSE> indicates a detached thread. When a thread is detached,
after termination it is immediately kicked out of the system instead of
inserted into the dead queue.
=item C<PTH_ATTR_CANCEL_STATE> (read-write) [C<unsigned int>]
The thread cancellation state, i.e., a combination of C<PTH_CANCEL_ENABLE> or
C<PTH_CANCEL_DISABLE> and C<PTH_CANCEL_DEFERRED> or
C<PTH_CANCEL_ASYNCHRONOUS>.
=item C<PTH_ATTR_STACK_SIZE> (read-write) [C<unsigned int>]
The thread stack size in bytes. Use lower values than 64 KB with great care!
=item C<PTH_ATTR_STACK_ADDR> (read-write) [C<char *>]
A pointer to the lower address of a chunk of malloc(3)'ed memory for the
stack.
=item C<PTH_ATTR_TIME_SPAWN> (read-only) [C<pth_time_t>]
The time when the thread was spawned.
This can be queried only when the attribute object is bound to a thread.
=item C<PTH_ATTR_TIME_LAST> (read-only) [C<pth_time_t>]
The time when the thread was last dispatched.
This can be queried only when the attribute object is bound to a thread.
=item C<PTH_ATTR_TIME_RAN> (read-only) [C<pth_time_t>]
The total time the thread was running.
This can be queried only when the attribute object is bound to a thread.
=item C<PTH_ATTR_START_FUNC> (read-only) [C<void *(*)(void *)>]
The thread start function.
This can be queried only when the attribute object is bound to a thread.
=item C<PTH_ATTR_START_ARG> (read-only) [C<void *>]
The thread start argument.
This can be queried only when the attribute object is bound to a thread.
=item C<PTH_ATTR_STATE> (read-only) [C<pth_state_t>]
The scheduling state of the thread, i.e., either C<PTH_STATE_NEW>,
C<PTH_STATE_READY>, C<PTH_STATE_WAITING>, or C<PTH_STATE_DEAD>.
This can be queried only when the attribute object is bound to a thread.
=item C<PTH_ATTR_EVENTS> (read-only) [C<pth_event_t>]
The event ring the thread is waiting for.
This can be queried only when the attribute object is bound to a thread.
=item C<PTH_ATTR_BOUND> (read-only) [C<int>]
Whether the attribute object is bound (C<TRUE>) to a thread or not (C<FALSE>).
=back
The following API functions can be used to handle the attribute objects:
=over 4
=item pth_attr_t B<pth_attr_of>(pth_t I<tid>);
This returns a new attribute object I<bound> to thread I<tid>. Any queries on
this object directly fetch attributes from I<tid>. And attribute modifications
directly change I<tid>. Use such attribute objects to modify existing threads.
=item pth_attr_t B<pth_attr_new>(void);
This returns a new I<unbound> attribute object. An implicit pth_attr_init() is
done on it. Any queries on this object just fetch stored attributes from it.
And attribute modifications just change the stored attributes. Use such
attribute objects to pre-configure attributes for to-be-spawned threads.
=item int B<pth_attr_init>(pth_attr_t I<attr>);
This initializes an attribute object I<attr> to the default values:
C<PTH_ATTR_PRIO> := C<PTH_PRIO_STD>, C<PTH_ATTR_NAME> := `C<unknown>',
C<PTH_ATTR_DISPATCHES> := C<0>, C<PTH_ATTR_JOINABLE> := C<TRUE>,
C<PTH_ATTR_CANCEL_STATE> := C<PTH_CANCEL_DEFAULT>,
C<PTH_ATTR_STACK_SIZE> := 64*1024 and
C<PTH_ATTR_STACK_ADDR> := C<NULL>. All other C<PTH_ATTR_*> attributes are
read-only attributes and don't receive default values in I<attr>, because they
exist only for bound attribute objects.
=item int B<pth_attr_set>(pth_attr_t I<attr>, int I<field>, ...);
This sets the attribute field I<field> in I<attr> to a value
specified as an additional argument on the variable argument
list. The following attribute I<field>s and argument pairs can
be used:
 PTH_ATTR_PRIO           int
 PTH_ATTR_NAME           char *
 PTH_ATTR_DISPATCHES     int
 PTH_ATTR_JOINABLE       int
 PTH_ATTR_CANCEL_STATE   unsigned int
 PTH_ATTR_STACK_SIZE     unsigned int
 PTH_ATTR_STACK_ADDR     char *
=item int B<pth_attr_get>(pth_attr_t I<attr>, int I<field>, ...);
This retrieves the attribute field I<field> in I<attr> and stores its
value in the variable specified through a pointer in an additional
argument on the variable argument list. The following I<field>s and
argument pairs can be used:
 PTH_ATTR_PRIO           int *
 PTH_ATTR_NAME           char **
 PTH_ATTR_DISPATCHES     int *
 PTH_ATTR_JOINABLE       int *
 PTH_ATTR_CANCEL_STATE   unsigned int *
 PTH_ATTR_STACK_SIZE     unsigned int *
 PTH_ATTR_STACK_ADDR     char **
 PTH_ATTR_TIME_SPAWN     pth_time_t *
 PTH_ATTR_TIME_LAST      pth_time_t *
 PTH_ATTR_TIME_RAN       pth_time_t *
 PTH_ATTR_START_FUNC     void *(**)(void *)
 PTH_ATTR_START_ARG      void **
 PTH_ATTR_STATE          pth_state_t *
 PTH_ATTR_EVENTS         pth_event_t *
 PTH_ATTR_BOUND          int *
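For example, a bound attribute object can be used to query the name of
the calling thread (a sketch):

    pth_attr_t a;
    char *name;

    a = pth_attr_of(pth_self());
    pth_attr_get(a, PTH_ATTR_NAME, &name);
    printf("running in thread \"%s\"\n", name);
    pth_attr_destroy(a);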
=item int B<pth_attr_destroy>(pth_attr_t I<attr>);
This destroys an attribute object I<attr>. After this, I<attr> is no
longer a valid attribute object.
=back
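Taken together, an unbound attribute object is typically used to
pre-configure and spawn a thread via pth_spawn(3) (see below) like this
(a sketch; worker() is a hypothetical thread entry function):

    pth_attr_t attr;
    pth_t tid;

    attr = pth_attr_new();
    pth_attr_set(attr, PTH_ATTR_NAME, "worker");
    pth_attr_set(attr, PTH_ATTR_PRIO, PTH_PRIO_STD);
    pth_attr_set(attr, PTH_ATTR_JOINABLE, TRUE);
    pth_attr_set(attr, PTH_ATTR_STACK_SIZE, 32*1024);
    tid = pth_spawn(attr, worker, NULL);
    pth_attr_destroy(attr);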
=head2 Thread Control
The following functions control the threading itself and make up the main API
of the B<Pth> library.
=over 4
=item pth_t B<pth_spawn>(pth_attr_t I<attr>, void *(*I<entry>)(void *), void *I<arg>);