nam 0.55.766
Purpose

A flexible realtime processor/sequencer for audio, MIDI and other signals.

Concept

Building blocks of signal input/output and transformation, Plugins, are arranged in a signal flow Graph. Synchronous (audio) and asynchronous (control) signals are transmitted via connections set up between plugin ports (In, Out).
Front Matter
This document presents nam, a realtime computer system for musical purposes. The project is discussed briefly in rationale, design and implementation. A reference of the Python programming interface, extracted from the docstrings of the original module, is included. Please understand that what you are looking at is a snapshot of a project under development. Your input is wanted to improve both this document and the project it describes. Newer versions can be obtained at http://quitte.de/nam.html.
Introduction
Il ne faut pas se résigner, il faut se révolter. ("One must not resign oneself; one must revolt.")
Albert Camus

Motivation

A few years ago, a little computer was my ticket to a world completely different from the one we experience every day. When I should have been studying, I spent my time exploring this strange and beautiful space, and whatever I did outside, in here the sky was the limit.
The computer then was an Atari.
As I journeyed on, exploring my alternative home in all its magnificence, it occurred to me that my powers over it were not as subtle as I would have liked them to be. Being already acquainted with programming, I felt this to be one direction to follow.

Concept

The new system had to be universal, simple, transparent, and maintainable; it had to allow virtually anything to connect, and it had to be cool about it, even if used by an intoxicated musician. The concept chosen is a 'signal flowchart' architecture: a processing graph that is populated by various interconnected plugins. There are plugins that produce signal streams from sources such as a MIDI keyboard, a disk file, an AD converter, an algorithm, a network socket etc. Other plugins operate on input signals and produce a transformation, and finally there are plugins that route signal streams on to outside the system, feeding disk files, DA converters, a MIDI interface etc. Any computer system striving for universality needs not only to be modular, it also needs to have a programming interface. The demand for simplicity and friendliness made Python the programming language of choice for this task.

Why on earth Yet Another Computer Music tool?

Yes, there are a lot of packages out there that do similar things, and some of them have reached levels I can certainly only dream of. However, they would not give me:
Why is it Free Software?

Never heard of the revolution yet? : )
Installation
Software

You will need these fine software packages:
Recommended addition:
Hardware

A decent audio interface should be installed and the ALSA driver module operational. Some rare sound cards exist that will not work with this library without either the card or the package undergoing a major redesign. If your sound interface works with ALSA but not with the AlsaClock plugin after trying all sensible frames_per_cycle settings, please send a bug report. MIDI communication is 'raw' and should work with anything that speaks the protocol and is not a block device file. The ALSA sequencer interface is also supported (without timing information or SysEx).

Porting

This package has been developed and designed to run on Linux, i386. It should compile and run on other Linux ports without modifications, but don't expect it to do so right away, and read on. Ports to other unices should be straightforward, especially if they provide a usable pthread implementation. Nevertheless, I recommend downgrading your OS to Linux, just as I always do. The implementation works on the premise that the target architecture provides 'real' atomic integer and pointer data types, ie. these types must exist and work without the protection of a spinlock. This may rule out ports to some processor families.
Download
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. The source package is here: nam-0.55.766.tar.gz. This is a first, pre-pre-pre-alpha release, and some things may not work the way one would expect them to. Bug reports are welcome.

Compiling
Unpack the source tarball and change to the source directory:

Configure the package: Please understand that the configure script is not very bright and mostly trusts you have the right versions of prerequisite packages installed. In particular, it may not detect an aged ALSA version.

Build it:

First Steps
After building the Python extension module, you can test it using the applications from the examples directory (please read the section below first).

Other examples are supplied to show how more meaningful applications can be built. Hack away and have fun. If you haven't done so already, please read the notes section below about the 'stdmsg' logfile, low-latency signal processing requirements and other points of importance before installing.

To make the module available outside the source directory, issue (as root):
| |||
Known Bugs
Possible caveats:

'Features':

Bug reports can be sent to tim@quitte.de.
Design and Implementation
Processing Cycles

Cycles, Threads, Plugin Order

Signal processing and propagation is carried out in cycles. There is a cycle dedicated to synchronous audio processing, and for the scheduling of asynchronous signals there is a low-latency, high-frequency cycle (the 'Time' cycle, 1024 Hz default) and a high-latency, low-frequency cycle (dubbed 'Beat', currently cycling at eighth-note intervals). All processing cycles are run in dedicated threads of execution. Plugins that act as cycle schedulers are responsible for managing these threads; the 'Beat' cycle thread is run by the graph. If possible, appropriate scheduling priorities are assigned. A plugin can take part in any combination of cycles. The sequence of plugin invocations in the processing cycles is determined by the graph following their layout in a two-dimensional space.

Other Processing Cycles, The Python Lock

Besides their work in the cycles already mentioned, some plugins occasionally need to execute code without realtime constraints. For example, a DiskStream plugin may calculate that it needs to read the next few seconds of audio data from a disk file. It will notify a DiskScheduler during its turn in the audio cycle; the scheduler thread is woken, and a disk processing cycle begins. The execution of Python code is serialized by a mutually exclusive lock; waiting for this mutex to be released is not an option under 'hard' realtime constraints. Therefore, a ScriptScheduler thread will be woken when a ScriptPlugin receives data. A script cycle is triggered, and the plugin is able to process. Thus, the Python lock, and the free use of dynamically allocated memory by the Python interpreter, make script plugins a source of potentially very high latency, limiting them to 'soft' realtime purposes.

Signal Travel

MIDI and other asynchronous signal data is transported by Events, synchronous audio by a dedicated type of AudioEvents (CyclicAudio). Event travel between an output port Out and an input port In is implemented as a lock-free first-in-first-out operation to allow different threads of processing across the connection.

Caching

A second fifo attached to every output port serves as an event cache. Besides guaranteeing that no memory allocation needs to take place under realtime constraints, this also means that events can be modified by plugins processing the output signal without touching longer-lived stored data (SysexEvents, when played through a Part, are an exception to this rule; they are sent 'as-is'). When an event has completed its journey through the graph, it is passed into another fifo, making its passage to a mediator thread. The mediator thread is responsible for the disposal of all objects, cached or uncached. Cyclic audio events, being used only within the audio thread, are immediately returned to the port they originate from.

Multiple Connections

Input ports usually support multiple connections, yielding a composite signal: audio signals are mixed, and event streams interleaved in timestamp order.

Time

Although it is of floating point type, the timestamp on events relates to a tick-based system much like that common in MIDI applications. This is to make it easier to interpret and modify event timing in the context of musical time and tempo, and to allow realtime tempo changes in conjunction with prequeued events. Plugins that care about audio signal time evaluate the 'frame' timestamp on AudioEvents, which relates to the master sample clock.
The graph maintains a set of clocks reflecting the current 'tick' (musical time), 'time' (seconds) and 'frame' (sample clock). If an AudioClock plugin is present, all clocks are synchronized to the sample clock.

C++ and Python

A major goal of the implementation has been to integrate the two languages as closely as possible. The type architecture of Python 2.2, in combination with C++ templates, has made it possible to implement a C++ class family whose members are legal Python citizens. Another aim has been to hide the complexity of a multi-threaded core from the Python API wherever possible. For example, modifications to the graph state (adding/removing a plugin, seeking etc) must be synchronized with the audio thread in order to prevent glitches or dropouts. The (quite delicate) mechanism that ensures this works 'invisibly': Python code sees these actions as mere method calls or property assignments.

Divide et ...

The benefits of the unification of Python and C++ classes are numerous: it is possible to subclass any of these types with Python classes, the two namespaces match very well, and the amount of code necessary to reconcile the compiled and the interpreted language can be distilled to a bare minimum. Because this Python API is considered their 'public' interface, the C++ structures can be coded in a very functional style. This approach has already proved beneficial to the development process by enabling the author to easily set up and closely examine debugging scenarios, without the influence common debugging tools exert on a program running multi-threaded. As a consequence of the far-reaching namespace symmetry, the information presented in the reference section of this document, while extracted from the Python interface, also documents the structure and workings of the underlying C++ fairly accurately.

I'd rather like to use this library without Python attached.

Yeah, sure. Send me a mail.

Status

The project currently manifests in a Python extension module, libmandala.so. This module is intended to serve as a construction kit for realtime signal processors and sequencers. The Python API is not considered stable yet.
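To give an idea of how the pieces described above fit together from Python, here is a minimal, hypothetical sketch. The plugin and method names are taken from the reference below; the use of keyword arguments, the port indices and the plugin positions are assumptions, not shipped example code.

  # A minimal sketch, not taken from the package: it assumes the types live at
  # the top level of the libmandala module, that the constructors accept the
  # keyword arguments suggested by their reference signatures, and that
  # increasing 'x' positions give the processing order sine -> gain -> output.
  import libmandala as nam

  graph = nam.Graph()

  # audio clock first, then the ALSA stream(s) -- see AlsaClock in the reference
  clock = graph.add(nam.AlsaClock(sample_rate=44100, frames_per_cycle=256), 0, 0)
  out = graph.add(nam.AlsaOut('hw:0,0'), 3, 0)

  sine = graph.add(nam.Sine(), 1, 0)        # Sine constructor arguments: assumption
  gain = graph.add(nam.Gain(db=-12.0), 2, 0)

  graph.state = 1                           # 'prepared': the streams create their ports

  graph.connect(sine.outs[0], gain.ins[0])  # audio: oscillator -> gain
  graph.connect(gain.outs[0], out.ins[0])   # audio: gain -> first hardware channel

  graph.state = 3                           # clocks running, all processing enabled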
Notes
The 'stdmsg' File

If this option has not been turned off at configuration time, the libmandala module will open a file called 'stdmsg' in the current working directory at the time it is imported. It serves as a logfile for background threads. Xruns (see below) are reported here.
Processing Latency

Suppose we wanted to build a realtime audio effects processor using this package. Such an application has an all-important property: the time it takes the signal to travel through the system, from entrance to exit. For playing an instrument through the processor, we want this delay to be as short as possible. The parameter that controls it is (along with the sample rate) the audio cycle time, ie. the number of samples the processor will read from and write to the audio device at a time (general-purpose computer systems could not keep the pace of the sample rate were they to process one sample at a time). Assume a cycle size of 64 frames at a sample rate of 44100 Hz: the application will read 64 samples at every audio cycle. This signal, when read from the input, can be thought to already be 1.5 ms old. The processor applies effects magic to the chunk and sends it to an audio output. This cannot be accomplished in 0 seconds, and so another audio cycle and another 1.5 ms pass before the signal reaches the output. The total processing latency is now a little less than 3 milliseconds, not taking into account the latency AD and DA converters may introduce. Thus, the application needs to complete every audio cycle in less than 1.5 ms, and to do so steadily every 1.5 ms.
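The arithmetic behind these figures, spelled out as a small Python sketch with the assumed example values:

  # latency estimate for the example above: 64-frame cycles at 44100 Hz
  sample_rate = 44100                                 # Hz
  frames_per_cycle = 64                               # samples per audio cycle

  cycle_ms = 1000.0 * frames_per_cycle / sample_rate  # ~1.45 ms per cycle
  total_ms = 2 * cycle_ms                             # one input + one output cycle
  print("cycle: %.2f ms, input to output: %.2f ms" % (cycle_ms, total_ms))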
This can only be accomplished if the thread the audio cycle is run in preempts all other threads on the system, a privilege only granted to processes run by the superuser (or granted this capability by the superuser, a feature not commonly compiled into the kernel). Even with adjusted scheduling priority, a stock Linux kernel does not support the low scheduling latency needed. When the audio interface signals that data is available, an audio cycle needs to start right away. An unmodified Linux kernel will sometimes, especially when under severe load, take generous amounts of time until the audio thread is scheduled to run. This behaviour is improved vastly by the aforementioned kernel patches.

Xruns

The sample buffers used with audio hardware are ringbuffers. This means that if an application doesn't manage to read or write quickly enough to keep up with the pace the sampling rate sets, data is lost. The read case ('overrun') and the write case ('underrun') are collectively dubbed 'xrun'.

Whoami?

As discussed above, for low-latency processing the audio thread needs root rights to run at enhanced scheduling priority. Another requirement for realtime operation is that the process' memory is never paged out; otherwise the response to a signal may be delayed because the process is forced to wait for its memory to come back. Turning off paging is also a root privilege. For the high-speed pulse required to precisely schedule MIDI events, the device file /dev/rtc is used, which will refuse pulse frequencies above 64 Hz to anyone but the superuser.

root

What all this boils down to is that for ultimate responsiveness and timing precision a realtime application must be run as root. Running a Python script as root is something many computer literates will call a gaping security hole.

System Clock

The RTClock plugin and other timing mechanisms in this package rely on the gettimeofday system call. Changing the system time while working with the package may confuse the clock plugin and cause a hiccup.

Source Code

All source text is formatted for a tab size of two spaces.

Writing New Plugins

The VCF plugin source code (file VCF.cc) provides an example of an audio plugin that is fairly simple, yet also fairly complete. External plugins, ie. new types deriving from any of the types in libmandala.so that come in their own module, are possible. Caveat one: the plugin module must contain a module initialization function like initlibmandala(). Caveat two: C++ exceptions thrown in a dynamically loaded module are not caught by the libmandala module (at least not if the package and the new module have been compiled by gcc 2.95.4).

Final Word

It's not the power you have, it's the use you put it to.
Reference
Object
 + AudioSignal
 |  + CyclicAudio
 |  + DecycledAudio
 + Event
 |  + FloatEvent
 |  + MidiEvent
 |  |  + Note
 |  |  + SysexEvent
 |  + TempoEvent
 |  + TextEvent
 |  + TimeEvent
 + NamedObject
 |  + EventList
 |  + FFT
 |  + GUSVoice
 |  + Granular
 |  + Graph
 |  + In
 |  |  + LadspaIn
 |  + OSS
 |  + ObjectFifo
 |  |  + EventCache
 |  |  + Out
 |  |  |  + LadspaOut
 |  + Plugin
 |  |  + AlsaMidi
 |  |  |  + AlsaMidiIn
 |  |  |  + AlsaMidiOut
 |  |  + AlsaStream
 |  |  |  + AlsaIn
 |  |  |  + AlsaOut
 |  |  + AudioClock
 |  |  |  + AlsaClock
 |  |  |  + Jack
 |  |  + BoundNoise
 |  |  + Chebyshev
 |  |  + Decycler
 |  |  + Delay
 |  |  + DiskStream
 |  |  + FIR
 |  |  + Gain
 |  |  |  + ADSR
 |  |  |  + StereoGain
 |  |  + IIR
 |  |  + IIWU
 |  |  + Inverter
 |  |  + JackIn
 |  |  + JackOut
 |  |  + LFO
 |  |  + LadspaPlugin
 |  |  + Lorenz
 |  |  + MidiIn
 |  |  + MidiOut
 |  |  + MidiPlexer
 |  |  + Modulator
 |  |  + Noise
 |  |  + Part
 |  |  + Pulse
 |  |  + RTClock
 |  |  + RandomSVF
 |  |  + Recycler
 |  |  + Roessler
 |  |  + SamplePlayer
 |  |  + Scheduler
 |  |  |  + DiskScheduler
 |  |  |  + ScriptScheduler
 |  |  + ScriptPlugin
 |  |  + Sine
 |  |  + Spice
 |  |  + SweepingSVF
 |  |  + VCF
 |  |  + VCO
 |  + SoundFile
 |  |  + MpegFile
 |  |  + OggFile
 |  |  + WaveFile
 |  + TimeMap
 + Resampler
 + SampledVoice
ADSR AlsaClock AlsaIn AlsaMidi AlsaMidiIn AlsaMidiOut AlsaOut AlsaStream AudioClock AudioSignal BoundNoise Chebyshev CyclicAudio DecycledAudio Decycler Delay DiskScheduler DiskStream Event EventCache EventList FFT FIR FloatEvent GUSVoice Gain Granular Graph IIR IIWU In Inverter Jack JackIn JackOut LFO LadspaIn LadspaOut LadspaPlugin Lorenz MidiEvent MidiIn MidiOut MidiPlexer Modulator MpegFile NamedObject Noise Note OSS Object ObjectFifo OggFile Out Part Plugin Pulse RTClock RandomSVF Recycler Resampler Roessler SamplePlayer SampledVoice Scheduler ScriptPlugin ScriptScheduler Sine SoundFile Spice StereoGain SweepingSVF SysexEvent TempoEvent TextEvent TimeEvent TimeMap VCF VCO WaveFile | ||
A | |||
ADSR | Index | ||
Gain, Plugin, NamedObject | |||
a simple Attack-Decay-Sustain-Release based audio gain. The ADSR responds to MidiEvents, which are routed through. The amplification factor ramps linearly (= gain decays exponentially) during all phases except attack, where a square mapping combined with linear interpolation is used. In addition to being applied to an audio stream routed through the plugin, the current factor is also sent on a third output port. |
|||
ADSR ([a = 0.02, d = 0.1, s = 0.7, sr = 24.0, r = 0.1]) |
|||
a |
'attack' is the time until full amplification.
| ||
d |
'decay' is the time from full gain until sustain level.
| ||
r |
'release' is the time from note off to zero amplification.
| ||
s |
'sustain' is the amplification factor at which the decay phase ends.
| ||
sr |
'sustain rate' is the time until sustain amplification is halved.
| ||
db, do_peak, factor, peak (Gain), cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
AlsaClock | Index | ||
AudioClock, Plugin, NamedObject | |||
an AudioClock that drives cyclic audio processing using AlsaStreams. the 'frames_per_cycle' parameter should be a power of two for most sound cards; USB 1.x devices will need a multiple of sample_rate / 1000, rounded up to the nearest integer instead afaik (untested with USB). Add this to the graph, then the ALSA stream(s), then set the graph state to 1 to configure the streams and make them instantiate their ports. Xruns are reported on the 'stdmsg' logfile. If more than one AlsaStream is present in the graph, they are linked to run in sync. Although ALSA allows it, it is usually not a good idea to try and run different hardware audio interfaces on one AlsaClock; they will soon start to drift because their hardware sample word clocks are not in sync if you haven't taken special precautions (SP/DIF or ADAT sync, or a soldering iron). The Midiman Delta series audio cards are claimed to work on a common sample clock. |
|||
AlsaClock ([sample_rate = 44100, frames_per_cycle = 2048, buffer_cycles = 2]) |
|||
frames_per_cycle |
the number of audio frames to process per cycle. Set this before starting the graph.
| ||
max_xrun_cycles |
the clock stops audio processing when an xrun happens twice within 'max_xrun_cycles' cycles of processing, default is 4. Set this to -1 to disable this feature and have the clock run on no matter what happens (not recommended).
| ||
priority |
the scheduling priority the clock thread has.
| ||
sample_rate |
the sample rate to use. Set this before starting the graph.
| ||
show_stats |
show statistics every 'show_stats' processing cycles. They are printed on the 'stdmsg' logfile.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
AlsaIn | Index | ||
AlsaStream, Plugin, NamedObject | |||
receives multi-channel audio signal from an ALSA device. Disable 'relaxed' to have the driver stop if an xrun happens, ie. if you cannot afford even a single sample overrun. Plugins of this type don't work without an AlsaClock in the same graph. Alsa streams have only been tested with hardware devices (device = 'hw:0,0') so far. |
|||
AlsaIn (device[, relaxed = 1]) |
|||
get_gain, relaxed, set_gain (AlsaStream), cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
AlsaMidi | Index | ||
Plugin, NamedObject | |||
AlsaMidiIn AlsaMidiOut |
abstract base type for ALSA sequencer MIDI ports. |
||
client |
the ALSA sequencer client ID, major part (minor is 0).
| ||
subscribe() |
subscribe (major, minor) -- connect to ALSA sequencer client 'major:minor'.
| ||
unsubscribe() |
unsubscribe (major, minor) -- disconnect from ALSA sequencer client 'major:minor'.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
AlsaMidiIn | Index | ||
AlsaMidi, Plugin, NamedObject | |||
an ALSA sequencer client that streams in MIDI. It cannot receive system-exclusive data (Sysex). |
|||
AlsaMidiIn ('name') |
|||
client, subscribe, unsubscribe (AlsaMidi), cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
AlsaMidiOut | Index | ||
AlsaMidi, Plugin, NamedObject | |||
an ALSA sequencer client that streams out MIDI. It cannot send system-exclusive (Sysex) events. Neither does it send MIDI clock. |
|||
AlsaMidiOut ('name') |
|||
send() |
send (event) -- send a MIDI event right now.
| ||
client, subscribe, unsubscribe (AlsaMidi), cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
AlsaOut | Index | ||
AlsaStream, Plugin, NamedObject | |||
sends multi-channel audio signal to an ALSA device. Disable 'relaxed' to have the driver stop if an xrun happens, ie. if you cannot afford even a single sample underrun. Plugins of this type don't work without an AlsaClock in the same graph. Alsa streams have only been tested with hardware devices (device = 'hw:0,0') so far. |
|||
AlsaOut (device[, relaxed = 1]) |
|||
get_gain, relaxed, set_gain (AlsaStream), cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
AlsaStream | Index | ||
Plugin, NamedObject | |||
AlsaIn AlsaOut |
abstract base type for audio input and output streams managed by ALSA sound drivers. AlsaStreams (~In, ~Out) are configured when a Graph is set to 'Prepared' state. They always ask the device for the maximum number of channels available. After configuring, every channel is assigned an In or Out depending on signal flow direction. Plugins of this type don't work without an AlsaClock in the same graph. You cannot instantiate this in Python, use AlsaIn and ~Out instead. |
||
get_gain() |
get_gain (channel) -- returns the gain on 'channel' (1 is unity).
| ||
relaxed |
If not relaxed, the driver will stop if an 'xrun' occurs. Is only evaluated when the stream starts, ie. set this first if you can't afford the slightest audio glitch.
| ||
set_gain() |
set_gain (channel, gain) -- sets the gain on 'channel' (1 is unity).
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
AudioClock | Index | ||
Plugin, NamedObject | |||
AlsaClock Jack |
abstract base type for audio clocks. These plugins schedule audio processing cycles in a graph. Most likely, an AudioClock will run or depend on a realtime thread that communicates with a sound device. |
||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
AudioSignal | Index | ||
Object | |||
CyclicAudio DecycledAudio |
a sequence of sampled audio signal. Supports basic principles of the sequence protocol [] as well as some inplace arithmetic. |
||
AudioSignal (size) |
|||
at() |
at (index) -- get the sample at index, which can be a floating point number. uses cubic interpolation.
| ||
blackman() |
blackman() -- apply a Blackman window to the signal samples (inplace).
| ||
blackman_harris() |
blackman_harris() -- apply a Blackman-Harris window to the signal samples (inplace).
| ||
copy() |
copy ([signal]) -- copy the signal data from 'signal', or, if no 'signal' is provided, return a copy.
| ||
diff() |
diff (signal) -- sample-wise compare, returns the square root of the sum of all squared per-sample differences.
| ||
downsample() |
downsample (n) -- returns a copy of the signal, downsampled by factor 'n', which must be an integer. the signal is simply decimated.
| ||
envelope() |
envelope ((index, gain), (index, gain), ...) -- applies a volume envelope consisting of linear amplitude fades (inplace).
| ||
filter() |
filter (method) -- iterate over all samples, replacing them with the return value of 'method (self[i])'.
| ||
find_0_crossing() |
find_0_crossing ([start = 0, sign = -1]) -- returns a (fractional) sample index where the waveform crosses 0. the search starts at sample index 'start' and matches only 0 crossings coming from 'sign', which defaults to -1, meaning that transitions from the lower to the upper lobe are detected.
| ||
frame |
the frame (point in time, measured in samples).
| ||
grain() |
grain (frame, length, fade) -- return the sub-signal from 'frame' to 'frame' + 'length', faded in and out over 'fade' samples.
| ||
hanning() |
hanning() -- apply a Hanning window to the signal samples (inplace).
| ||
kaiser() |
kaiser ([beta = 1]) -- apply a Kaiser window to the signal samples (inplace).
| ||
normalize() |
normalize() -- scale the signal so that the peak sample value is +-1.
| ||
paste() |
paste (signal, frame[, gain = 1.]) -- add 'signal' at 'frame' with 'gain'. 'frame' can be a negative number.
| ||
peak() |
peak() -- return the peak amplitude of the signal.
| ||
pisarenko1() |
pisarenko1 ([start, end]) -- performs Pisarenko Harmonic Decomposition for a single sinusoid. multiply the returned value with the sample rate to get the frequency in Hz. Estimates get more precise with larger sample sizes and better S/N ratios.
| ||
ramp() |
ramp (s0, s1) -- sample a linear ramp from 's0' to 's1'.
| ||
rectify() |
rectify() -- replace all samples with -1 or 1 depending on their sign (inplace).
| ||
rms() |
rms() -- calculate root-mean-square.
| ||
sample() |
sample (method) -- sample the return value of 'method (x)' over a range [0 .. 1[.
| ||
set_data() |
set_data (data) -- set signal to a copy of native-endian, 16-bit signed string 'data'. this is a hack you should not use.
| ||
silence() |
silence() -- set all samples to zero.
| ||
sinc() |
sinc (omega) -- samples the sinc function | ||
sine() |
sine (cycles[, phase = 0]) -- sample a variable number of cycles of sin(), starting at 'phase'.
| ||
upsample() |
upsample (n) -- returns a copy of the signal, upsampled by factor 'n', which must be an integer. the signal is simply interleaved with 0s.
| ||
xfade() |
xfade (other signal) -- merge with another signal, crossfading for constant signal energy.
| ||
xfade_into() |
xfade_into (other, my index, his index ) -- returns a new audio signal composed of 'self' and 'other'. the source signals are cross-faded so that at 'my index', 'self' is inaudible, and 'other' is at full gain.
| ||
xfade_loop() |
xfade_loop (start, end) -- blends the slice [start:end] with the slice [start - (end - start):start] so that start:end form a smooth loop (inplace).
| ||
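A small off-line sketch using some of the methods above (the sizes and values are arbitrary):

  # off-line AudioSignal use; all methods as documented above
  import libmandala as nam

  sig = nam.AudioSignal(4096)
  sig.sine(16)          # sixteen sine cycles across the buffer
  sig.hanning()         # apply a Hanning window in place
  sig.normalize()       # scale the peak to +-1

  print(sig.peak(), sig.rms())
  print(sig.at(10.5))   # cubic-interpolated read between samples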
| |||
B | |||
BoundNoise | Index | ||
Plugin, NamedObject | |||
filtered and ring-modulated noise. |
|||
BoundNoise ([f = .01]) |
|||
f |
frequency, > 0 and < 0.026 are stable.
| ||
one_shot() |
one_shot (signal) -- sample the fractal into 'signal'.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
C | |||
Chebyshev | Index | ||
Plugin, NamedObject | |||
adds harmonics using a Chebyshev polynomial. you can specify either 'h' -- a list of harmonic amplitudes, or 'c' -- list of polynomial coefficients. |
|||
Chebyshev ([h = (0, 1,), c]) |
|||
c |
tuple of polynomial coefficients.
| ||
h |
tuple of harmonic amplitudes.
| ||
one_shot() |
one_shot (signal) -- treat an AudioSignal.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
CyclicAudio | Index | ||
AudioSignal | |||
sampled audio signal as used in the audio cycle. You cannot instantiate this in Python. |
|||
at, blackman, blackman_harris, copy, diff, downsample, envelope, filter, find_0_crossing, frame, grain, hanning, kaiser, normalize, paste, peak, pisarenko1, ramp, rectify, rms, sample, set_data, silence, sinc, sine, upsample, xfade, xfade_into, xfade_loop (AudioSignal) | |||
| |||
D | |||
DecycledAudio | Index | ||
AudioSignal | |||
sequence of sampled audio signal as produced by a Decycler plugin. |
|||
at, blackman, blackman_harris, copy, diff, downsample, envelope, filter, find_0_crossing, frame, grain, hanning, kaiser, normalize, paste, peak, pisarenko1, ramp, rectify, rms, sample, set_data, silence, sinc, sine, upsample, xfade, xfade_into, xfade_loop (AudioSignal) | |||
| |||
Decycler | Index | ||
Plugin, NamedObject | |||
collects realtime cyclic audio signal into DecycledAudio. It can be used to 'decouple' an audio signal from cyclic low-latency audio processing; the resulting stream of DecycledAudio can then be processed by a ScriptPlugin. A Recycler plugin can be used to feed the DecycledAudio stream back into the audio processing cycle. The Decycler produces a stream of silent DecycledAudio if its input is not connected. |
|||
Decycler (collect) |
|||
collect |
number of audio samples to collect into one DecycledAudio.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
Delay | Index | ||
Plugin, NamedObject | |||
Delay. |
|||
Delay ([t = .5, wet = .5]) |
|||
fb |
feedback amount, from [0 .. 1].
| ||
t |
delay time, from [0 .. 1].
| ||
wet |
dry/wet ratio, from [0 .. 1].
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
DiskScheduler | Index | ||
Scheduler, Plugin, NamedObject | |||
a plugin that is needed for DiskPlugins to operate, see the Scheduler base type for the details. |
|||
DiskScheduler() |
|||
priority (Scheduler), cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
DiskStream | Index | ||
Plugin, NamedObject | |||
recording and playback plugin for sound files. Pass a SoundFile derivative (currently WaveFile or MpegFile) to the constructor. If the file has been opened for write access, the input signal will be recorded into that file until the graph is stopped, a seek is performed or the end of a loop is hit. Afterwards, the recording will be played back and the input signal dumped. This plugin depends on the presence of a DiskScheduler in the graph it resides in to work as expected. |
|||
DiskStream (file) |
|||
file |
the sound file.
| ||
mode |
the mode (read/write).
| ||
monitor |
whether to copy the recorded signal to the plugin's outputs.
| ||
to_play_on_loop |
whether to switch from record to play in looped time.
| ||
to_play_on_seek |
whether to switch from record to play when jumping in time.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
E | |||
Event | Index | ||
Object | |||
FloatEvent MidiEvent TempoEvent TextEvent TimeEvent |
an Object with a musical timestamp. |
||
Event (tick[, loop]) |
|||
loop |
'loop' stores the loop cycle number, needed for accurate event timing around the loop points.
| ||
tick |
'tick' is a timestamp in musical time.
| ||
| |||
EventCache | Index | ||
ObjectFifo, NamedObject | |||
a cache holding preallocated Events. Every 'Out' has a cache holding events of the type the output is registered with. Events fetched from an event cache should be fed to the 'Out' the cache belongs to. For every event you want to transmit, you should fetch a new event from the cache, ie. don't push the same event twice. NB: Failing to observe this can cause a core dump. You cannot instantiate this in Python. |
|||
have, pop, push, size (ObjectFifo), name (NamedObject) | |||
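A sketch of this cache discipline as it might appear in a ScriptPlugin's processing code; 'script' and the exact way an event is handed on are assumptions based on the descriptions above:

  # hypothetical ScriptPlugin processing step; 'script' is assumed to be a
  # ScriptPlugin set up elsewhere, and push() is assumed to be how an event
  # is handed to the connected input
  out = script.outs[0]
  ev = out.cache.pop()          # a preallocated event of the port's registered type
  ev.tick = script.graph.tick   # stamp it with the current musical time
  out.push(ev)                  # transmit; never push the same event twice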
| |||
EventList | Index | ||
NamedObject | |||
a list of a certain type of Events, sorted by tick. |
|||
EventList (type) |
|||
add() |
add (event[, t]) -- add a copy of 'event' at optional tick 't'.
| ||
allow_doubles |
if 0, don't add double notes (same tick, same note).
| ||
color |
the color to use when drawing the events from this list.
| ||
index_of_tick() |
index_of_tick (tick) -- returns the list insertion point for 'tick'.
| ||
insert() |
insert (event[, t]) -- insert 'event' at optional tick 't'. Note that this doesn't insert a copy, use EventList.add() for that.
| ||
lock() |
lock() -- by acquiring this lock, the part is protected from other threads' attempts at reading or modifying its contents.
| ||
quiet |
if 0, print warnings about ignored notes and note-offs to stdmsg.
| ||
remove() |
remove (event) -- remove 'event'.
| ||
smf_track() |
smf_track() -- returns events encoded as a Standard MIDI File track.
| ||
type |
the type of events accepted.
| ||
unlock() |
unlock() -- releases the protecting lock.
| ||
name (NamedObject) | |||
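A sketch of filling a list with Notes and playing it through a Part; 'graph' and a receiving plugin 'synth' are assumed to exist (e.g. from a setup like the one sketched in the design section), and the Note keywords follow its signature further below:

  # build a three-note list and attach it to a Part
  import libmandala as nam

  lst = nam.EventList(nam.Note)
  for i, pitch in enumerate(('c4', 'e4', 'g4')):
      lst.add(nam.Note(note=pitch, v=100, l=24, tick=i * graph.quarter))

  part = graph.add(nam.Part(lst), 0, 1)
  graph.connect(part.outs[0], synth.ins[0])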
| |||
F | |||
FFT | Index | ||
NamedObject | |||
Jean-Baptiste Joseph Fourier's transform. Code by fftw.org. Only does real to real transforms. |
|||
FFT (size) |
|||
eno() |
eno (fft'd signal) -- apply the inverse FT to an audio signal. if the f-domain data doesn't match the FFT's size, it is truncated or zero-padded to fit, meaning the signal is resampled.
| ||
one() |
one (signal[, start = 0]) -- compute the FT of an audio signal. returns a new signal, scaled to sqrt (FFT size) / 2; for the layout of the returned data please consult the FFTW documentation.
| ||
power() |
power (fft'd signal[, logit = 0]) -- return the power spectrum of frequency domain data, normalized to [0 .. 1] or in log scale.
| ||
size |
size, in samples, of the FFT.
| ||
name (NamedObject) | |||
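A short off-line sketch of the methods above:

  # forward transform and power spectrum of a test signal
  import libmandala as nam

  sig = nam.AudioSignal(1024)
  sig.sine(8)              # eight cycles -> energy concentrated around bin 8
  fft = nam.FFT(1024)

  spec = fft.one(sig)      # frequency-domain data, FFTW layout
  power = fft.power(spec)  # normalized to [0 .. 1]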
| |||
FIR | Index | ||
Plugin, NamedObject | |||
Finite Impulse Response filter. The filter supports kernels of up to 128 coefficients, the default being an identity set of size 1. The implementation is brute force; therefore its use is confined mainly to academic exercises. |
|||
FIR ([c = (1,)]) |
|||
c |
c, tuple containing the filter kernel coefficients.
| ||
downsample() |
downsample (audio signal, ratio) -- downsample the signal by 'ratio', using this filter for the decimation.
| ||
get_kernel() |
get_kernel() -- return the filter coefficients as an AudioSignal.
| ||
one_shot() |
one_shot (audio signal) -- process a signal, starting from a filter history of zeros. (in-place)
| ||
upsample() |
upsample (audio signal, ratio) -- upsample the signal by 'ratio', using this filter for the interpolation.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
FloatEvent | Index | ||
Event | |||
an Event that carries a floating point value. |
|||
FloatEvent (value[, tick]) |
|||
value |
the floating point value carried by the event.
| ||
loop, tick (Event) | |||
| |||
G | |||
GUSVoice | Index | ||
NamedObject | |||
one voice from a GUS patch file, returned by load_gus_patch(). |
|||
env_offsets |
envelope offsets, 6 bytes: attack, decay, sustain, 3 * release.
| ||
env_rates |
envelope rates, 6 bytes: attack, decay, sustain, 3 * release.
| ||
freq |
(low, high, root) frequencies.
| ||
loop |
(start, end) sample offsets.
| ||
mode |
mode bits.
| ||
pan |
pan position, from [0 .. 127].
| ||
sample |
the sample data, an AudioSignal.
| ||
sample_rate |
the sample rate, int.
| ||
name (NamedObject) | |||
| |||
Gain | Index | ||
Plugin, NamedObject | |||
ADSR StereoGain |
an audio plugin that applies amplification and optionally calculates the peak signal value after applying gain. |
||
Gain ([db = 0.0, n_outs = 1, do_peak = 1]) |
|||
db |
the amplification factor expressed as gain, in dB.
| ||
do_peak |
whether peak values are calculated.
| ||
factor |
the amplification factor; 1.0 is unity. The factor value is computed from a dB gain value like this: def db2factor (db): return pow (10., db / 20.)
| ||
peak |
the signal peak value so far. The value is reset to 0 upon reading.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
Granular | Index | ||
NamedObject | |||
granular oscillator. |
|||
Granular (audio signal[, gain = 1]) |
|||
body |
envelope body length, s.
| ||
fade |
envelope fadein and ~out length, s.
| ||
gain |
linear gain control.
| ||
gain_jitter |
gain jitter control (max. attenuation).
| ||
jitter |
sample positional jitter, [0 .. 1].
| ||
one_shot() |
one_shot (signal) -- in-place process an AudioSignal.
| ||
step |
time between two consecutive grains, s.
| ||
name (NamedObject) | |||
| |||
Graph | Index | ||
NamedObject | |||
central to all signal processing. Maintains a list of plugins. It also keeps account of the passage of time in three ways: ticks (musical time), seconds ('real time') and audio frames (samples). |
|||
Graph (quarter = 192) |
|||
add() |
add (plugin[, y]|[, x, y]) -- add 'plugin' to the graph at the position denoted by ('x', 'y'). returns the plugin for convenience.
| ||
connect() |
connect (out, in) -- connect a plugin output to a plugin input.
| ||
cpu |
maximum relative amount of cpu usage for handling an audio cycle.
| ||
disconnect() |
disconnect (out) -- disconnect a plugin output.
| ||
frame |
the current audio frame.
| ||
frame2tick() |
frame2tick (frame) -- translate 'frame' into ticks.
| ||
frame2time() |
frame2time (frame) -- translate an audio frame position into time (in seconds).
| ||
frames_per_cycle |
the number of audio frames to process per cycle. Usually set by the audio clock in the graph.
| ||
loop |
a tuple of the start and end points of the loop, in ticks.
| ||
move() |
move (plugin, x, y) -- move 'plugin' to ('x', 'y'). Because this operation changes the processing order, it may produce unexpected silence.
| ||
plugins |
a tuple containing all plugins registered with this graph, sorted by processing order.
| ||
quarter |
the (integer) number of ticks per quarter note.
| ||
remove() |
remove (plugin) -- remove 'plugin' from the graph.
| ||
remove_tempo() |
remove_tempo (tick[, n = 1]) -- remove 'n' tempo changes starting at 'tick'.
| ||
sample_rate |
the sample rate to use. Usually set by the audio clock in the graph.
| ||
state |
The state of the Graph can be one of the following:
0 -- all clocks are stopped. No processing of any kind takes place.
1 -- everything is prepared for rolling, but otherwise the same as 0.
2 -- processing takes place, but no playback or recording; clocks are not advanced.
3 -- clocks are running, and all processing is enabled.
These states correspond directly to the four constants defined in Plugin. | ||
store_tempo() |
store_tempo (bpm, tick) -- store a change of tempo to 'bpm' at 'tick'.
| ||
tempo_map |
a tuple containing all events from the graph's tempo map.
| ||
tick |
the current tick.
| ||
tick2frame() |
tick2frame (tick) -- translate a tick into a frame position.
| ||
tick2time() |
tick2time (tick) -- translate a tick into time (in seconds).
| ||
time |
the current song time.
| ||
time2frame() |
time2frame (time) -- translate 'time' into an audio frame position.
| ||
time2tick() |
time2tick (time) -- translate 'time' into ticks.
| ||
time_as_string() |
time_as_string() -- return the current position as 'h:mm:ss.cc'.
| ||
time_map |
the TimeMap.
| ||
use_loop |
whether the loop mechanism is active or not.
| ||
virtual_tick |
the current virtual tick (counted when state is Activated).
| ||
virtual_time |
the current virtual song time (counted when state is Activated).
| ||
name (NamedObject) | |||
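A sketch of the musical-time helpers above; whether the constructor and methods accept exactly these arguments is an assumption:

  # musical-time bookkeeping on a Graph
  import libmandala as nam

  g = nam.Graph(192)                 # 192 ticks per quarter note
  g.store_tempo(120.0, 0)            # 120 bpm from tick 0
  g.loop = (0, 4 * 4 * g.quarter)    # loop over four 4/4 bars, in ticks
  g.use_loop = 1

  print(g.tick2time(g.quarter))      # one quarter note = 0.5 s at 120 bpm
  print(g.time_as_string())          # 'h:mm:ss.cc'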
| |||
I | |||
IIR | Index | ||
Plugin, NamedObject | |||
Infinite Impulse Response filter. The filter supports up to 16 recursion coefficients; the default being an identity set. The biquad variation (three 'a' coefficients, two 'b') is special-cased for better performance. |
|||
IIR ([ab = ((1,), (0,))]) |
|||
ab |
ab, tuple of (a, b) where 'a' and 'b' are tuples themselves, containing the recursion coefficients. 'b[0]' is ignored.
| ||
bp() |
bp (fc, Q) -- initialize filter coefficients for a band-pass.
| ||
cheby_lp() |
cheby_lp (f, ripple, order[, alpha = .05]) -- compute coefficients for a Chebyshev lo-pass (or a Butterworth if ripple is 0). NB the filter gain probably needs scaling before the filter is useful, for example like so:
iir.scale (1 / iir.dc_gain()). | ||
dc_gain() |
dc_gain() -- return the gain (linear) at DC.
| ||
from_splane() |
from_splane (poles, zeros) -- compute from poles and zeros in the s-plane; 'poles' and 'zeros' are tuples of complex numbers.
| ||
from_zplane() |
from_zplane (poles, zeros) -- compute from poles and zeros in the z-plane; 'poles' and 'zeros' are tuples of complex numbers.
| ||
nyquist_gain() |
nyquist_gain() -- return the gain (linear) at Nyquist.
| ||
one_shot() |
one_shot (audio event) -- process the contents of a single audio event, starting from a filter history of zeroes. the audio signal is processed in-place.
| ||
rbj_bp() |
rbj_bp (fc, Q) -- initialize filter coefficients for a band-pass bi-quad according to Robert Bristow-Johnson.
| ||
rbj_hi_shelve() |
rbj_hi_shelve (fc, Q, dB) -- initialize filter coefficients for a hi-shelve bi-quad according to Robert Bristow-Johnson.
| ||
rbj_hp() |
rbj_hp (fc, Q) -- initialize filter coefficients for a hi-pass bi-quad according to Robert Bristow-Johnson.
| ||
rbj_lo_shelve() |
rbj_lo_shelve (fc, Q, dB) -- initialize filter coefficients for a lo-shelve bi-quad according to Robert Bristow-Johnson.
| ||
rbj_lp() |
rbj_lp (fc, Q) -- initialize filter coefficients for a lo-pass bi-quad according to Robert Bristow-Johnson.
| ||
rbj_notch() |
rbj_notch (fc, Q) -- initialize filter coefficients for a notch bi-quad according to Robert Bristow-Johnson.
| ||
rbj_peaking() |
rbj_peaking (fc, Q, dB) -- initialize filter coefficients for a peaking eq bi-quad according to Robert Bristow-Johnson.
| ||
scale() |
scale (factor) -- scale the filter gain by a linear 'factor'.
| ||
zoelzer_lp() |
zoelzer_lp (fc, Q) -- initialize filter coefficients for a lo-pass bi-quad according to Udo Zoelzer.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
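A sketch of off-line filter design with the methods above; the frequency units passed to the design methods are an assumption:

  # two ways to set up the filter off-line; whether 'fc' and 'f' are fractions
  # of the sample rate is an assumption -- check the source if in doubt
  import libmandala as nam

  lp = nam.IIR()
  lp.rbj_lp(0.02, 0.707)               # bi-quad low-pass: fc, Q

  cheby = nam.IIR()
  cheby.cheby_lp(0.05, 0.5, 4)         # f, ripple, order
  cheby.scale(1.0 / cheby.dc_gain())   # normalize the DC gain, as recommended above

  print(lp.dc_gain(), cheby.nyquist_gain())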
| |||
IIWU | Index | ||
Plugin, NamedObject | |||
a plugin that wraps the 'If I Were You' wavetable synthesizer. The 'voices' setting determines the maximum number of concurrently sounding voices and thereby the maximum CPU usage. 'path' should point to a soundfont file in SF2 format. 'flags' is a combination of the constants found in the type dict. |
|||
IIWU (voices[, path = None, flags = 0]) |
|||
add_sf2() |
add_sf2 (path) -- add a soundfont to the fluid sf2 stack.
| ||
cmdline() |
cmdline ('fluid command') -- issue a fluid command; returns a string reply.
| ||
gain |
the output signal is multiplied by this value.
| ||
Chorus | 2 | ||
Reverb | 1 | ||
Verbose | 4 | ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
In | Index | ||
NamedObject | |||
LadspaIn |
an input connection point on a plugin, used to receive signal data. You cannot instantiate this in Python; only ScriptPlugins have a new_in() method. |
||
connected |
a tuple of all Out ports connected to this port.
| ||
n_slots |
the max. number of connections supported.
| ||
peek() |
peek() -- return the next event at this input point, without removing it from the queue.
| ||
plugin |
the Plugin the port is part of.
| ||
pop() |
pop() -- return the next event at this input point, and remove it from the queue.
| ||
type |
the type of events accepted.
| ||
name (NamedObject) | |||
| |||
Inverter | Index | ||
Plugin, NamedObject | |||
Audio plugin that inverts the signal processed. |
|||
Inverter() |
|||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
J | |||
Jack | Index | ||
AudioClock, Plugin, NamedObject | |||
an AudioClock that connects to the Jack audio server, identifying itself as 'name'. After creating the Jack plugin, you may want to create JackIn and/or JackOut plugins that stream audio signal in from or out into the Jack graph. |
|||
Jack (name) |
|||
connect() |
connect ('source', 'dest') -- asks Jack to connect two Jack ports. 'source' and 'dest' can be strings or In/Out ports on a JackOut/JackIn, as long as their name has not been changed.
| ||
disconnect() |
disconnect ('source', 'dest') -- asks Jack to disconnect two Jack ports. 'source' and 'dest' can be strings or In/Out ports on a JackOut/JackIn, as long as their name has not been changed.
| ||
jackd_is_down |
whether jackd is currently down, ie. no longer processing.
| ||
ports |
all ports on the jack graph.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
JackIn | Index | ||
Plugin, NamedObject | |||
a plugin that streams audio data in from the Jack graph. |
|||
JackIn (jack, n_ports) |
|||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
JackOut | Index | ||
Plugin, NamedObject | |||
a plugin that streams audio signal out into the Jack graph. |
|||
JackOut (jack, n_ports) |
|||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
L | |||
LFO | Index | ||
Plugin, NamedObject | |||
A low-frequency oscillator, built around a wavepoint array. The repeat 'rate' of the oscillator is expressed in ticks/cycle. The 'wave' can be set to an arbitrary collection of numbers up to 'wave_size'; for generating the output stream they are seen as evenly spaced points on the wave. Points in between are calculated by a third-order spline algorithm, so if you want a square wave LFO you need to turn off 'interpolate' or set a lot of points and accept a certain overshoot. |
|||
LFO ([rate = 144, wave = (0, 1), interpolate = 1]) |
|||
interpolate |
whether to apply 3rd order spline interpolation.
| ||
rate |
the wave cycle repeats every 'rate' ticks.
| ||
wave |
a tuple of numbers describing evenly spaced points on the wave.
| ||
wave_size |
how many points the wave can contain.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
LadspaIn | Index | ||
In, NamedObject | |||
an In port on a LadspaPlugin. You cannot instantiate this in Python. |
|||
bounds |
a tuple containing the lower and upper bound for the port value.
| ||
hints |
the hints define additional information about the port value, according to the constants defined in LadspaPlugin.
| ||
value |
the current port value.
| ||
connected, n_slots, peek, plugin, pop, type (In), name (NamedObject) | |||
| |||
LadspaOut | Index | ||
Out, ObjectFifo, NamedObject | |||
an Out port on a LadspaPlugin. You cannot instantiate this in Python. |
|||
bounds |
a tuple containing the lower and upper bound for the port value.
| ||
hints |
the hints define additional information about the port value, according to the constants defined in LadspaPlugin.
| ||
value |
the current port value.
| ||
cache, connect, connected, plugin, type (Out), have, pop, push, size (ObjectFifo), name (NamedObject) | |||
| |||
LadspaPlugin | Index | ||
Plugin, NamedObject | |||
an external audio DSP plugin. The plugin is searched for in '/usr/local/lib/ladspa/' if the filename does not mention a path (contains no '/'). If the file opened contains more than one plugin, use 'name' to choose one by its name, or 'id' to choose by its unique LADSPA ID. The default is to instantiate the first plugin from the file. |
|||
LadspaPlugin ('ladspa.so'[, name = 'name' or id = -1, subcycle = 0]) |
|||
id |
the plugin's unique ID, according to its LADSPA descriptor.
| ||
instantiate() |
instantiate ([sample rate = 44100]) -- prepare the plugin for off-line processing.
| ||
label |
the plugin's LADSPA descriptor member 'Name'.
| ||
next() |
next() -- returns the next plugin from the shared object, or None.
| ||
one_shot() |
one_shot (audio signals) -- run the plugin on the provided audio data.
| ||
reset() |
reset() -- resets the plugin's internal state by calling its deactivate() and activate() methods.
| ||
subcycle |
whether to subdivide audio cycles into chunks of 64 samples for better input response.
| ||
BOUNDED_ABOVE | 2 | ||
BOUNDED_BELOW | 1 | ||
INTEGER | 32 | ||
LOGARITHMIC | 16 | ||
SAMPLE_RATE | 8 | ||
TOGGLED | 4 | ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
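A hypothetical off-line use of a LADSPA plugin; 'filter.so' is a placeholder file name, not something shipped with this package, and passing a single AudioSignal assumes a mono plugin:

  # off-line use of a LADSPA plugin
  import libmandala as nam

  lp = nam.LadspaPlugin('filter.so')   # first plugin found in the file
  lp.instantiate(44100)                # prepare for off-line processing

  sig = nam.AudioSignal(4096)
  sig.sine(32)
  lp.one_shot(sig)                     # run the plugin over the audio data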
| |||
Lorenz | Index | ||
Plugin, NamedObject | |||
a Lorenz attractor fractal. only the x coordinate is sampled. for more info go to |
|||
Lorenz ([h = .01]) |
|||
h |
frequency, > 0 and < 0.026 are stable.
| ||
one_shot() |
one_shot (signal) -- sample the fractal into 'signal'.
| ||
step() |
step ([n=1]) -- progress the fractal by n steps and return (x, y, z).
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
M | |||
MidiEvent | Index | ||
Event | |||
Note SysexEvent |
an Event that holds a short MIDI message in up to four bytes. The message bytes are accessible through the sequence protocol [ ]. Depending on the message contents, there's also access through the more meaningful attributes. Although some quiet saturation mechanisms are applied to the message bytes, the message is not strictly enforced to adhere to the MIDI standard. For sending arbitrary data to a MIDI device, the SysexEvent is a better candidate. |
||
MidiEvent (bytes or attribute = value pairs) |
|||
channel |
the midi channel. None if status >= 0xF0.
| ||
controller |
the controller. None if status != 0xB0.
| ||
data |
the raw event data bytes.
| ||
is_note_off |
whether the event is a note-off. Can return positive even if status is 0x90 if the velocity 'v' is 0; this is a peculiarity of the MIDI protocol that reduces bandwidth need.
| ||
is_note_on |
whether the event is a note-on. Can return negative even if status is 0x90, see 'is_note_off' documentation.
| ||
note |
the note (60 is middle c). None if status not in (0x80, 0x90, 0xA0).
When setting the note, a string can be used instead of a number. You can either follow common practice, ie. use upper/lower case characters with tick marks (",'`" are recognized) or specify the octave by number. In either case, any number of signs ("b#") following the note and preceding the octave modifier are accepted. | ||
pitch |
the pitch bend value. None if status != 0xE0.
| ||
program |
the program. None if status != 0xC0.
| ||
status |
the status (unshifted hi nibble of the first data byte if < 0xF0).
| ||
sysex |
the manufacturer if the event is Sysex, None otherwise.
| ||
v |
the velocity, or value. None if status < 0x80 or >= 0xE0.
| ||
ChannelPressure | 0xD0 = 208 | ||
Controller | 0xB0 = 176 | ||
NoteOff | 0x80 = 128 | ||
NoteOn | 0x90 = 144 | ||
NotePressure | 0xA0 = 160 | ||
Pitch | 0xE0 = 224 | ||
Program | 0xC0 = 192 | ||
Sysex | 0xF0 = 240 | ||
loop, tick (Event) | |||
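A sketch of constructing MidiEvents from 'attribute = value' pairs as described above; the keyword spellings follow the attribute names and are an assumption:

  # note-on and controller events
  import libmandala as nam

  on = nam.MidiEvent(status=nam.MidiEvent.NoteOn, channel=0, note='Eb4', v=100)
  cc = nam.MidiEvent(status=nam.MidiEvent.Controller, channel=0, controller=7, v=127)

  print(on.note, on.is_note_on)   # note number and note-on test
  print(cc[0], cc[1], cc[2])      # the raw bytes, via the sequence protocol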
| |||
MidiIn | Index | ||
Plugin, NamedObject | |||
receives MIDI events from a file you specify. Common choices for files will be /dev/snd/midiCxDy (ALSA card x device y) or /dev/midiN (OSS midi driver), but fifos work, too. This plugin needs to reside in a graph holding a RTClock to put proper timestamps on the received events. All plugins of this type share a worker thread that wakes them up as soon as incoming data is signalled. This makes their use on regular files dubious. The worker thread is automatically started and stopped; there's no special scheduler plugin you'd need to put into the graph. |
|||
MidiIn ('/path/to/file') |
|||
log |
whether to log incoming MIDI events to the stdmsg file.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
MidiOut | Index | ||
Plugin, NamedObject | |||
transmits MIDI events on a file you specify, and does this mostly according to the MIDI standard. Common choices for files will be /dev/snd/midiCxDy (ALSA card x device y) or /dev/midiN (OSS midi driver), but fifos work, too. For proper operation this plugin must reside in a graph that also holds a RTClock. This is because the plugin operation is based on the assumption that it drives a MIDI port at a bandwidth of 31.25 kbit (raw). Therefore it requires a precise idea of time to properly schedule the streaming of outbound MIDI data. This plugin sends a continuous stream of MIDI clock signals (bytes of value 0xF8) by default. This can be disabled by setting 'do_clock' to 0. |
|||
MidiOut ('/path/to/file'[, do_clock = 1]) |
|||
do_clock |
whether to send MIDI clock sync signals.
| ||
log |
whether to log the outgoing MIDI bytestream to the stdmsg file.
| ||
predelay |
to achieve the best timing of events, they should be written to the MIDI device early to compensate for the message transmission time. This predelay value defaults to about one millisecond (transmission time for three bytes, or a note-on message).
| ||
throughput |
assumed bandwidth of the MIDI interface in bytes/second.
| ||
write() |
write (event) -- send a MIDI event right now.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
MidiPlexer | Index | ||
Plugin, NamedObject | |||
a Plugin that copies one inbound stream of MidiEvents to two outbound streams. SysexEvents are only routed through once. |
|||
MidiPlexer() |
|||
block() |
block (MIDI status) -- don't forward MIDI events with 'status' to the 'thru' output.
| ||
block_mask |
bit mask indicating, per MIDI status nibble, whether to forward events:
bit 0 -- 0x80 = NoteOff
bit 1 -- 0x90 = NoteOn
bit 2 -- 0xA0 = NotePressure
bit 3 -- 0xB0 = Controller
bit 4 -- 0xC0 = Program
bit 5 -- 0xD0 = ChannelPressure
bit 6 -- 0xE0 = Pitch
bit 7 -- 0xF0 = Sysex and realtime events | ||
channel |
the channel to impose on MidiEvents routed, or -1.
| ||
is_blocked() |
is_blocked (MIDI status) -- whether MIDI events with 'status' are forwarded to the 'thru' output or not.
| ||
send() |
send (MIDI event) -- send a MIDI event right now. Acquires the plexer mutex, which can, depending on the plexer's input connection, block realtime threads, so use with care.
| ||
unblock() |
unblock (MIDI status) -- do forward MIDI events with 'status' to the 'thru' output.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
Modulator | Index | ||
Plugin, NamedObject | |||
a plugin that operates in the audio cycle. The incoming FloatEvent signal is multiplied by the 'mul' value, then 'add' is added. NB: This plugin will probably be restructured or dumped in future releases. Your design ideas for a hypothetical Translator plugin that maps from one unit space to another are welcome. |
|||
Modulator ([mul = 1.0, add = 0.0]) |
|||
add |
the value that is added to incoming signal, after multiplying it.
| ||
mul |
the value the incoming signal is multiplied with, before adding 'add'.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
MpegFile | Index | ||
SoundFile, NamedObject | |||
a read-only file that is accessed through libmpeg3. Use in conjunction with a DiskStream. This plugin is quite demanding in terms of CPU usage. |
|||
MpegFile ('/path/file.mp3'[, stream=0]) |
|||
channels, frame, frames, mode, path, read, sample_rate, seek, write (SoundFile), name (NamedObject) | |||
| |||
N | |||
NamedObject | Index | ||
Object | |||
EventList FFT GUSVoice Granular Graph In OSS ObjectFifo Plugin SoundFile TimeMap |
an Object that has a mutable name. Most of the 'more complicated' object types inherit from this. |
||
NamedObject (name) |
|||
name |
the object name.
| ||
| |||
Noise | Index | ||
Plugin, NamedObject | |||
a plugin that generates noise. |
|||
Noise ([method = 0, rows = 12]) |
|||
method |
0 -- white noise
1 -- pink noise
2 -- 1/f noise | ||
one_shot() |
one_shot (signal) -- sample into an AudioSignal.
| ||
rows |
changes the noise character for pink noise.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
Note | Index | ||
MidiEvent, Event | |||
a MidiEvent with status 0x90 and a length. NB: all plugins except Part will mistake this event for a single note-on. Parts emit a note-off when the Note ends. |
|||
Note ([note = 'Eb4', v = 100, l = 24, tick = 0]) |
|||
l |
the length of the note, in ticks.
| ||
channel, controller, data, is_note_off, is_note_on, note, pitch, program, status, sysex, v (MidiEvent), loop, tick (Event) | |||
| |||
O | |||
OSS | Index | ||
NamedObject | |||
OpenSoundSystem audio output. Output is done in a separate thread using blocking write(2) calls. To play audio, pass AudioSignals to the play() method. The signals are written to a fifo, from which they are fetched by the writer thread; up to 128 AudioSignals can be queued for seamless playback. The queued signals may be of arbitrary individual length; the have() method returns the total number of frames currently waiting for playback in the queue, and the sync() method lets you sync to the end of a block write cycle. Only 1- and 2-channel operation is supported. In 2-channel mode, play() expects 2 AudioSignals instead of just one. |
|||
OSS ([path = '/dev/dsp', channels = 1, sample_rate = 44100, blocksize = 4096]) |
|||
blocksize |
number of frames written per cycle and channel.
| ||
gain |
1 = unity
| ||
have() |
have() -- return the total number of frames queued (comprises all AudioSignals in the queue).
| ||
play() |
play (1 or 2 audio signals) -- queue the AudioSignal(s) for playing. the signal should be discarded and not modified after this call because it is not copied.
| ||
sample_rate |
the sample rate, in Hz.
| ||
sync() |
sync() -- sleep until the driver signals a write possible condition.
| ||
write() |
equivalent to play().
| ||
name (NamedObject) | |||
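A playback sketch under a few assumptions: the module imports as 'nam', the file is stereo, read() advances the file pointer, and play() accepts the two channel signals as separate arguments in 2-channel mode:

    import nam
    out = nam.OSS (path = '/dev/dsp', channels = 2, sample_rate = 44100)
    f = nam.WaveFile ('loop.wav')          # hypothetical stereo file
    while f.frame < f.frames:
        left, right = f.read (4096)        # one AudioSignal per channel
        out.play (left, right)             # queued; do not reuse the signals
        while out.have () > 32768:         # keep the queue from overflowing
            out.sync ()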
| |||
Object | Index | ||
AudioSignal Event NamedObject Resampler SampledVoice |
basetype to all module types. |
||
Object() |
|||
| |||
ObjectFifo | Index | ||
NamedObject | |||
EventCache Out |
a first-in-first-out queue that is designed for lock-free passage of Objects from one thread to another. |
||
ObjectFifo (size[, name]) |
|||
have |
the number of objects currently stored.
| ||
pop() |
pop() -- return the bottom-most object and remove it from the fifo.
| ||
push() |
push (object) -- push this object into the fifo.
| ||
size |
the number of objects the fifo can hold.
| ||
name (NamedObject) | |||
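A sketch passing a TextEvent through a fifo (single-threaded here, just to show the calls):

    import nam
    fifo = nam.ObjectFifo (64, 'events')        # room for 64 objects
    fifo.push (nam.TextEvent (text = 'hello'))
    while fifo.have:                            # number of objects currently stored
        print (fifo.pop ().text)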
| |||
OggFile | Index | ||
SoundFile, NamedObject | |||
ogg audio file, read only, no sample rate conversion. |
|||
OggFile ('file.ogg') |
|||
channels, frame, frames, mode, path, read, sample_rate, seek, write (SoundFile), name (NamedObject) | |||
| |||
Out | Index | ||
ObjectFifo, NamedObject | |||
LadspaOut |
a connection point on a plugin that transmits events. You cannot instantiate this in Python; only ScriptPlugins have a new_out() method. |
||
cache |
An EventCache storing events of the type transmitted.
| ||
connect() |
connect (in) -- connect to an In (an input port on a plugin). Connecting to None disconnects. The connection can not be established if the In does not dig the type of events the output produces, or if an audio connection is asked for that would not work with the processing order of the plugins to be connected.
| ||
connected |
The In that receives the signal transmitted, or None.
| ||
plugin |
The Plugin this belongs to.
| ||
type |
The type of events transmitted.
| ||
have, pop, push, size (ObjectFifo), name (NamedObject) | |||
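A connection sketch with hypothetical 'sine' and 'gain' plugins already registered in the same graph; ports are picked by position in the 'ins'/'outs' tuples:

    # connect the first output of 'sine' to the first input of 'gain'
    sine.outs[0].connect (gain.ins[0])
    print (sine.outs[0].connected)     # -> the In just connected, or None
    sine.outs[0].connect (None)        # disconnect again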
| |||
P | |||
Part | Index | ||
Plugin, NamedObject | |||
a Plugin that streams out events from an event list. |
|||
Part (list) |
|||
channel |
the channel to impose on (MIDI) events played, or None.
| ||
list |
The EventList.
| ||
mute |
controls whether Note events are sent (all other events are sent regardless of this flag).
| ||
transpose |
transpose played Notes this many semitones.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
Plugin | Index | ||
NamedObject | |||
AlsaMidi AlsaStream AudioClock BoundNoise Chebyshev Decycler Delay DiskStream FIR Gain IIR IIWU Inverter JackIn JackOut LFO LadspaPlugin Lorenz MidiIn MidiOut MidiPlexer Modulator Noise Part Pulse RTClock RandomSVF Recycler Roessler SamplePlayer Scheduler ScriptPlugin Sine Spice SweepingSVF VCF VCO |
abstract base type to all functional units in the processing graph. Plugins can have any number of input and output connection points (In and Out respectively) through which they communicate signal data. These ports are listed in the 'ins' and 'outs' tuples. Most plugins will not work correctly, or not at all, if the graph they are in does not also contain the respective scheduler or clock plugin; for example, a DiskStream will not work without a DiskScheduler. You cannot instantiate this in Python; use a ScriptPlugin instead. |
||
cpu |
The maximum time the plugin spent in an audio cycle so far, expressed as a fraction of the time available for a complete audio cycle. The value is reset to zero upon reading.
Note that this figure can only serve as an indicator if the audio cycle is not run at system-wide top priority. | ||
flags |
Beat = 1, Audio = 2, Time = 4, Disk = 8, Script = 16,
the processing cycles the plugin takes part in. | ||
graph |
the graph the plugin resides in.
| ||
ins |
all input connection points.
| ||
loop_cycle |
the sequential number of the loop cycle the plugin is in.
| ||
outs |
all output connection points.
| ||
state |
Off = 0, Prepared = 1, Activated = 2, Rolling = 3.
| ||
x |
the 'x' coordinate, see the documentation on 'y' for what it means.
| ||
y |
the higher the 'y' coordinate, the later a plugin is processed in the various cycles. If two plugins are at the same 'y', 'x' decides over the resulting order.
| ||
Activated | 2 | ||
Off | 0 | ||
Prepared | 1 | ||
Rolling | 3 | ||
name (NamedObject) | |||
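A short sketch of the ordering rule, with hypothetical 'osc' and 'gain' plugin instances living in the same graph:

    osc.y = 0          # processed early in each cycle
    gain.y = 1         # processed after 'osc'
    print (gain.cpu)   # worst-case fraction of the audio cycle used since last read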
| |||
Pulse | Index | ||
Plugin, NamedObject | |||
rectangular, amplitude 1 pulse. |
|||
Pulse ([period = 2048, width = 1]) |
|||
period |
the pulse repeat period, in samples.
| ||
width |
the width of the pulse, in samples.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
R | |||
RTClock | Index | ||
Plugin, NamedObject | |||
a scheduler plugin that enables a graph to maintain an account of musical and 'real' time. You should add this to any graph that is not limited to purely processing audio. If possible, the linux device file '/dev/rtc' is used to generate a periodic pulse of up to 8192 Hz, but you need root privileges to activate frequencies above the default 64 Hz with an unpatched linux kernel. (Note that 2.4.x kernels allow setting the maximum frequency available to non-privileged users via /proc/sys/dev/rtc/max-user-freq.) No error is raised if the desired frequency is not available. The actual resolution will never drop below the kernel scheduler frequency, which on unpatched i386 production kernels before 2.6 is 100 Hz. If an audio clock is registered with the same graph, it will act as an additional pulse source for this clock. |
|||
RTClock ([frequency = 1024]) |
|||
frequency |
the frequency at which the system real time clock is generating pulses, should be a power of 2.
| ||
jitter |
the maximum time a pulse has been delayed. the value is reset to zero upon reading.
| ||
priority |
the scheduling priority the clock thread enjoys.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
RandomSVF | Index | ||
Plugin, NamedObject | |||
SVF stepping through random cutoff frequencies. |
|||
RandomSVF ([f, Q]) |
|||
Q |
value from 0 .. 1.
| ||
f |
cutoff frequency, Hz.
| ||
range |
2 MIDI note numbers denoting the cutoff frequency range.
| ||
t |
time between filter changes.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
Recycler | Index | ||
Plugin, NamedObject | |||
feeds a stream of incoming samples into a cyclic audio signal. See the Decycler documentation. |
|||
Recycler() |
|||
latency |
the difference, in audio frames, between the timing of the incoming DecycledAudio stream and the main audio clock.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
Resampler | Index | ||
Object | |||
resamples audio signals. |
|||
Resampler (ratio, type = 0) |
|||
resample() |
resample (audio signal) -- return a resampled copy of the signal.
| ||
reset() |
reset() -- call this when applying the resampler to a different signal stream.
| ||
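A sketch; whether 'ratio' means output rate over input rate is an assumption here, and 'signal' stands for an AudioSignal obtained elsewhere (eg. from a SoundFile's read()):

    import nam
    r = nam.Resampler (48000.0 / 44100.0)   # assumption: ratio = output rate / input rate
    out = r.resample (signal)               # returns a resampled copy of 'signal'
    r.reset ()                              # before applying 'r' to a different stream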
| |||
Roessler | Index | ||
Plugin, NamedObject | |||
a Roessler attractor fractal. The x and z coordinates are sampled. |
|||
Roessler ([h = .01]) |
|||
h |
frequency; values > 0 and < 0.096 are stable.
| ||
one_shot() |
one_shot (signal) -- sample the fractal into 'signal'.
| ||
step() |
step() -- progress the fractal by one step and return (x, y, z).
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
S | |||
SamplePlayer | Index | ||
Plugin, NamedObject | |||
a Plugin containing 128 SampledVoices. |
|||
SamplePlayer ([channels = 2, db = -12]) |
|||
add() |
add (sampled voice) -- add a SampledVoice to the voice list.
| ||
channels |
output channels (1 or 2).
| ||
clear() |
clear() -- remove all voices from the voice list.
| ||
db |
volume control.
| ||
layered |
if True, overlapping voice ranges cause multiple oscillators to kick off. default is False, ie. just one oscillator per note.
| ||
playing |
the number of samples currently sounding.
| ||
remove() |
remove (sampled voice) -- remove a SampledVoice from the voice list.
| ||
transpose |
semitones.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
SampledVoice | Index | ||
Object | |||
A Sample for use by a SamplePlayer. |
|||
SampledVoice (sample[, property = value, ...]) |
|||
decay |
decay ratio, seconds / 6 dB.
| ||
env2f |
envelope to filter, from -1 to 1.
| ||
group |
integer identifier for the voice group (terminates other voices of the same group at note-on). Group IDs <= 0 are ignored.
| ||
loop |
tuple of (loop start, loop end) in frames, or None for no loop.
| ||
n2f |
note to filter.
| ||
note |
natural note of the sample.
| ||
pan |
pan position, -1 = left, 0 = center, 1 = right.
| ||
range |
(lowest, highest) note this voice can play.
| ||
release |
release ratio, seconds / 6 dB.
| ||
rnd2f |
randomize voice filter, from -1 to 1.
| ||
tune |
tune, 0. = no tuning, 1. = semitone up.
| ||
v2f |
velocity to filter.
| ||
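A sketch combining SamplePlayer and SampledVoice; what the 'sample' argument is exactly (sample data documented elsewhere in this reference) and whether note names are accepted for 'note' and 'range' are assumptions:

    import nam
    player = nam.SamplePlayer (channels = 2, db = -6)
    voice = nam.SampledVoice (sample,                # sample data, obtained elsewhere
                              note = 'C3',           # natural pitch of the recording
                              range = ('C1', 'C6'),  # playable note range
                              pan = 0,               # centered
                              decay = 0.5)           # seconds / 6 dB
    player.add (voice)
    player.transpose = -12                           # play everything an octave down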
| |||
Scheduler | Index | ||
Plugin, NamedObject | |||
DiskScheduler ScriptScheduler |
a plugin that is required by plugins that need to execute code outside realtime constraints (eg. Script and Disk plugins). The Scheduler runs a sleeping worker thread that is woken when needed; it then asks the graph it is registered with to run the plugin in question. |
||
Scheduler() |
|||
priority |
the scheduling priority the worker thread enjoys.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
ScriptPlugin | Index | ||
Plugin, NamedObject | |||
a Plugin that is meant to be subclassed, declaring a 'process' and/or a 'process_beat' method. The 'process' method is called, without arguments, whenever the plugin receives data; the plugin should pop all events from its inputs. The 'process_beat' method, if defined, is called roughly once per eighth note. It is passed two ticks describing an interval, usually starting an eighth note ahead of the current time and spanning the length of an eighth; the plugin is expected to generate events for this interval. NB: Proper event timing around loop points requires plugins to put a correct 'loop' stamp on the events they transmit. A ScriptScheduler is needed in the same graph for the 'process' method to work; the 'process_beat' method needs an RTClock. |
|||
ScriptPlugin() |
|||
new_in() |
new_in (name, type[, slots = 1]) -- add an input point for 'type' events. the input will allow up to 'slots' connecting outputs.
| ||
new_out() |
new_out (name, type) -- add an output point 'name' for 'type' events.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
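A minimal subclassing sketch. It assumes the module imports as 'nam', that MidiEvent is a valid port type for new_in()/new_out(), that an In exposes the same have/pop interface an Out inherits from ObjectFifo, and that a MidiEvent's 'note' attribute is writable:

    import nam

    class OctaveUp (nam.ScriptPlugin):
        "copies incoming MIDI events to its output, notes transposed up an octave"

        def __init__ (self):
            nam.ScriptPlugin.__init__ (self)
            self.new_in ('in', nam.MidiEvent)
            self.new_out ('out', nam.MidiEvent)

        def process (self):
            port = self.ins[0]
            while port.have:                  # drain the input, as required
                ev = port.pop ()
                if ev.is_note_on or ev.is_note_off:
                    ev.note += 12             # assumes 'note' is writable
                self.outs[0].push (ev)

A ScriptScheduler must live in the same graph for process() to be called at all.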
| |||
ScriptScheduler | Index | ||
Scheduler, Plugin, NamedObject | |||
a plugin that is needed for ScriptPlugins to operate, see the Scheduler base type for the details. |
|||
ScriptScheduler() |
|||
priority (Scheduler), cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
Sine | Index | ||
Plugin, NamedObject | |||
a plugin that generates a sine wave at frequency 'f'. |
|||
Sine (f) |
|||
f |
frequency.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
SoundFile | Index | ||
NamedObject | |||
MpegFile OggFile WaveFile |
an abstract base type. Derived objects from the extension module can be used by DiskStreams for playback and recording. |
||
channels |
the number of channels.
| ||
frame |
the current file pointer in audio frames.
| ||
frames |
the length, in audio frames.
| ||
mode |
the file access mode.
| ||
path |
where the file is.
| ||
read() |
read (frames) -- returns a tuple of AudioSignals, one per channel.
| ||
sample_rate |
the sample rate.
| ||
seek() |
seek (frames) -- moves the file pointer.
| ||
write() |
write (sequence of AudioSignals[, frames]) -- writes 'frames' frames from a sequence of signals (one per channel), or, if 'frames' is omitted, as many frames as the shortest signal in the sequence holds.
| ||
name (NamedObject) | |||
| |||
Spice | Index | ||
Plugin, NamedObject | |||
tone coloring based on a sampled valve response. |
|||
Spice (gain = 0) |
|||
active |
whether the effect is active or bypassed.
| ||
bias |
operating point for amplitude mapping. don't change this unless you know what you are doing.
| ||
gain |
gain factors greater than 1 cause hard clipping.
| ||
one_shot() |
one_shot (signal) -- in-place process an AudioSignal.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
StereoGain | Index | ||
Gain, Plugin, NamedObject | |||
applies gain to two channels of audio. |
|||
StereoGain ([db = 0, do_peak = 0]) |
|||
db, do_peak, factor, peak (Gain), cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
SweepingSVF | Index | ||
Plugin, NamedObject | |||
LFO-modulated SVF |
|||
SweepingSVF ([f, Q]) |
|||
Q |
value from 0 .. 1.
| ||
f |
cutoff frequency, Hz.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
SysexEvent | Index | ||
MidiEvent, Event | |||
a MidiEvent that can store any number of message bytes. |
|||
SysexEvent ([data, tick]) |
|||
channel, controller, data, is_note_off, is_note_on, note, pitch, program, status, sysex, v (MidiEvent), loop, tick (Event) | |||
| |||
T | |||
TempoEvent | Index | ||
Event | |||
an Event that describes musical tempo in quarter beats per minute. |
|||
TempoEvent ([bpm = 96.0, tick = 0]) |
|||
bpm |
the beats (quarter notes) per minute.
| ||
sec_per_tick |
seconds per tick.
| ||
loop, tick (Event) | |||
| |||
TextEvent | Index | ||
Event | |||
an Event that carries a text. |
|||
TextEvent ([text = '']) |
|||
text |
the label.
| ||
loop, tick (Event) | |||
| |||
TimeEvent | Index | ||
Event | |||
an Event that describes how to measure musical time. |
|||
TimeEvent ([n = 4, d = 4, ticks = 24, tick = 0]) |
|||
bbt |
bar/beat/tick.
| ||
d |
the denominator of time measure.
| ||
n |
the numerator of time measure.
| ||
ticks |
the number of 'tick' time measuring units per denominator of the measure.
| ||
loop, tick (Event) | |||
| |||
TimeMap | Index | ||
NamedObject | |||
an object that manages TimeEvents, enabling it to map between ticks and bar.beat.tick. |
|||
TimeMap (list) |
|||
add() |
add (event[, t]) -- add 'event' at optional tick 't'.
| ||
bbt() |
bbt (tick) -- returns tick mapped into (bar, beat, tick).
| ||
lock() |
lock() -- by acquiring this lock, the time map is protected from other threads' attempts at reading or modifying its contents.
| ||
remove() |
remove (event) -- remove 'event'.
| ||
tick() |
tick (bar, beat = 0, tick = 0) -- returns bar, beat, tick mapped to ticks.
| ||
unlock() |
unlock() -- releases the protecting lock.
| ||
name (NamedObject) | |||
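A usage sketch; 'events' stands for an EventList of TimeEvents built elsewhere (the EventList type is documented earlier in this reference), and keyword use of the TimeEvent constructor defaults is an assumption:

    import nam
    tmap = nam.TimeMap (events)                   # 'events' -- an EventList of TimeEvents
    tmap.add (nam.TimeEvent (n = 3, d = 4), 0)    # 3/4 from tick 0 onwards

    tmap.lock ()
    try:
        print (tmap.bbt (100))                    # tick 100 as (bar, beat, tick)
        print (tmap.tick (2))                     # the tick at which bar 2 starts
    finally:
        tmap.unlock ()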
| |||
V | |||
VCF | Index | ||
Plugin, NamedObject | |||
Moog VCF, Variation 1, code by Paul Kellett. Lowpass only. |
|||
VCF ([f, resonance]) |
|||
f |
cutoff frequency, Hertz.
| ||
resonance |
value from 0 .. 1.
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
VCO | Index | ||
Plugin, NamedObject | |||
value-controlled oscillator. The current implementation does not sweep; the oscillation shape is a blend between sawtooth (0) and square (1), with variable pulse width. Basic anti-aliasing is provided; however, frequencies beyond about 2 kHz will cause audibly aliased output. |
|||
VCO ([f = n2f ('c'), shape = 0, pw = 1]) |
|||
f |
frequency, in Hertz.
| ||
pw |
pulse width (0 .. 1), if 'shape' != 0.
| ||
shape |
shape (0 = sawtooth .. 1 = square).
| ||
cpu, flags, graph, ins, loop_cycle, outs, state, x, y (Plugin), name (NamedObject) | |||
| |||
W | |||
WaveFile | Index | ||
SoundFile, NamedObject | |||
an uncompressed PCM audio file in .wav format, accessed via memory-mapped I/O. Use with a DiskStream. If 'mode' is 'w', the file is opened for recording. If bits is an integer in (32, 16, 8) the file will contain integer samples of this bit depth; 'bits' = 0 is for 32 bit IEEE floats. Any other 'bits' value will raise an exception. No sample rate conversions are done. |
|||
WaveFile ('file.wav'[, mode = 'r', channels = 1, sample_rate = 44100, bits = 0]) |
|||
channels, frame, frames, mode, path, read, sample_rate, seek, write (SoundFile), name (NamedObject) | |||
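A copying sketch built only from the SoundFile interface above; the file names are hypothetical, the bracketed constructor defaults are assumed to be usable as keyword arguments, and read() is assumed to advance the file pointer:

    import nam
    src = nam.WaveFile ('in.wav')                        # read-only, mode 'r'
    dst = nam.WaveFile ('out.wav', mode = 'w',
                        channels = src.channels,
                        sample_rate = src.sample_rate,
                        bits = 16)                       # 16 bit integer samples
    while src.frame < src.frames:
        signals = src.read (65536)                       # one AudioSignal per channel
        dst.write (signals)                              # writes the shortest signal's length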
| |||
|
Gtk Reference | |||
RGB FreeType Gtk Glyph Pixels PixelView + CurveView + RGBView + DataView + MidiView + PoleZeroView |
CurveView DataView FreeType Glyph Gtk MidiView PixelView Pixels PoleZeroView RGB RGBView | ||
C | |||
CurveView | Index | ||
PixelView | |||
a PixelView displaying <= 'size' data points. |
|||
CurveView (size[, label = '']) |
|||
clamp |
whether to try and clamp the y scale to [-1, 1].
| ||
label |
is displayed along with the data set.
| ||
redraw() |
redraw() -- redraw the displayed data.
| ||
set() |
set (data object[, offset]) -- set the data viewed.
| ||
size |
the number of sample points.
| ||
blit, pixels, widget (PixelView) | |||
| |||
D | |||
DataView | Index | ||
RGBView | |||
a RGBView displaying the contents of Audio event lists. |
|||
DataView() |
|||
add() |
add (signal) -- add one signal to the list.
| ||
amp_per_pixel |
the y zoom (amplitude / pixel).
| ||
clamp() |
clamp (upper, lower) -- adjust the display window.
| ||
clear() |
clear () -- remove all signals from the display list.
| ||
frame2x() |
frame2x (frame) -- translate the frame index 'frame' into an x coordinate.
| ||
get_length() |
get_length() -- return the length of the longest displayed signal.
| ||
marks |
((index, amplitude, color), ...) -- dataview draws a mark in 'color' at sample 'index' at amplitude 'amplitude'.
| ||
pixels_per_frame |
the x zoom.
| ||
remove() |
remove (signal) -- remove a signal from the display list.
| ||
sample2y() |
sample2y (sample) -- translate 'sample' into a pixel offset.
| ||
x2frame() |
x2frame (x) -- translate 'x' into a frame.
| ||
x_offset |
the frame at which the display window opens on the signals.
| ||
x_scale |
draw vertical grid lines every 'x_scale' pixels.
| ||
y2sample() |
y2sample (y) -- translate 'y' into a sample amplitude.
| ||
y_align |
the y alignment.
| ||
y_offset |
offset all amplitudes by this value.
| ||
y_scale |
a tuple of three sample values indicating where to draw horizontal grid lines.
| ||
redraw, rgb (RGBView), blit, pixels, widget (PixelView) | |||
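A display sketch with several assumptions: the Gtk reference types import from the same place as the rest of this reference, 'signal' is an AudioSignal obtained elsewhere, and the pygtk embedding details (threading, startup order of the Gtk event loop) are glossed over:

    import gtk, nam
    view = nam.DataView ()
    view.add (signal)                  # 'signal' -- an AudioSignal obtained elsewhere
    view.pixels_per_frame = 0.01       # x zoom
    view.x_offset = 0                  # show the signal from its first frame

    window = gtk.Window ()
    window.add (view.widget)           # 'widget' is the underlying pygtk DrawingArea
    window.show_all ()
    ui = nam.Gtk ()                    # runs the Gtk+ event loop in its own thread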
| |||
F | |||
FreeType | Index | ||
a FreeType font face, used to render text and individual glyphs into RGB buffers. |
|||
FreeType() |
|||
alpha |
current transparency
| ||
attach() |
attach (path) -- used to load .afm font metrics.
| ||
color |
current (r, g, b[, alpha])
| ||
draw() |
draw (rgb, u'text', size) -- render a string to an RGB.
| ||
first_char |
first char in charmap
| ||
get_char_index() |
get_char_index (char) -- return the glyph index of a char.
| ||
get_glyph() |
get_glyph (char) -- return the glyph representing 'char'.
| ||
get_glyph_name() |
get_glyph_name (glyph) -- return the name of a glyph.
| ||
glyph() |
glyph (rgb, glyph, size) -- render a glyph to an RGB.
| ||
move() |
move (x, y) -- move the baseline start, resets the relative position.
| ||
ps_name |
the PostScript name of the font face.
| ||
rotate() |
rotate (angle) -- set the glyph transform matrix.
| ||
shift() |
shift (x, y) -- shift the next glyph by (x, y).
| ||
size |
current pixel size
| ||
x |
current x of the pen
| ||
y |
current y of the pen baseline
| ||
| |||
G | |||
Glyph | Index | ||
one glyph. |
|||
advance |
advance vector
| ||
contours |
outline contours, a list of individual outlines in the glyph, each of them a list of points. A point is ((x, y), tags) where 'tags' has bit 0 set if the point is on the line. If it is off the line (clear bit) then bit 1 turns it into a 3rd-order control point, otherwise it is 2nd-order.
| ||
points |
outline points
| ||
render() |
render (rgb) -- render the outline into an rgb buffer
| ||
| |||
Gtk | Index | ||
a thread running the Gtk+ event loop. |
|||
Gtk() |
|||
| |||
M | |||
MidiView | Index | ||
RGBView | |||
a RGBView displaying the contents of Midi event lists. |
|||
MidiView() |
|||
add() |
add (eventlist) -- add an eventlist to the view.
| ||
get_marks_around() |
get_marks_around (tick) -- returns (t0, t1) ticks of the closest previous and next mark around 'tick', or None.
| ||
hit_test() |
hit_test (x, y) -- return the object at (x, y) or None.
| ||
marks |
the marks.
| ||
new_mark() |
new_mark (name, tick, color = (0, 0, 0)) -- create a mark at 'tick'. returns the mark object.
| ||
note2y() |
note2y (note) -- translate 'note' into a pixel offset.
| ||
remove() |
remove (eventlist) -- remove 'eventlist'.
| ||
remove_mark() |
remove_mark (mark) -- remove 'mark'.
| ||
scroll() |
scroll (dx, dy) -- scroll the view by (dx, dy) pixels.
| ||
snap |
the basic tick grid step.
| ||
tick2x() |
tick2x (tick) -- translate 'tick' into a pixel offset.
| ||
time_map |
the time map.
| ||
x0 |
the left pixel offset.
| ||
x2tick() |
x2tick (x) -- translate 'x' into a tick.
| ||
y0 |
the top pixel offset.
| ||
y2note() |
y2note (y) -- translate 'y' into a note pitch.
| ||
zoom_x() |
zoom_x (delta, x) -- zoom in or out horizontally, depending on the sign of 'delta'. other than that, the value of 'delta' is ignored. 'x' is consulted for realigning the view (should be the mouse position).
| ||
zoom_y() |
zoom_y (delta, y) -- zoom in or out vertically, depending on the sign of 'delta'. other than that, the value of 'delta' is ignored. 'y' is consulted for realigning the view (should be the mouse position).
| ||
Mark |
MidiView.Mark() -- marks a point in time within a MidiView.
| ||
redraw, rgb (RGBView), blit, pixels, widget (PixelView) | |||
| |||
P | |||
PixelView | Index | ||
CurveView RGBView |
a DrawingArea displaying Pixels. |
||
PixelView() |
|||
blit() |
blit() -- copy the pixels to the widget.
| ||
pixels |
the Pixels viewed.
| ||
widget |
the pygtk widget.
| ||
| |||
Pixels | Index | ||
a Gdk pixmap. |
|||
Pixels (widget, width, height) |
|||
draw() |
draw (rgb) -- draw the contents of 'rgb'.
| ||
height |
the height, in pixels.
| ||
name |
the object name.
| ||
width |
the width, in pixels.
| ||
| |||
PoleZeroView | Index | ||
RGBView | |||
a RGBView displaying poles and zeros and the unit circle. |
|||
PoleZeroView() |
|||
get_amplitude_response() |
get_amplitude_response (n) -- compute the amplitude response over 'n' frequency bins. returns an AudioSignal.
| ||
imag2y() |
imag2y (imag) -- translate 'imag' into a pixel offset.
| ||
mark |
complex number where a mark is drawn.
| ||
poles |
the poles (tuple of complex numbers).
| ||
real2x() |
real2x (real) -- translate 'real' into an x coordinate.
| ||
x2real() |
x2real (x) -- translate 'x' into a real.
| ||
y2imag() |
y2imag (y) -- translate 'y' into an imaginary value.
| ||
zeros |
the zeros (tuple of complex numbers).
| ||
redraw, rgb (RGBView), blit, pixels, widget (PixelView) | |||
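A sketch exercising only the documented members; the pole and zero values are arbitrary illustration:

    import nam
    view = nam.PoleZeroView ()
    view.poles = (0.6 + 0.6j, 0.6 - 0.6j)          # a resonant pole pair inside the unit circle
    view.zeros = (-1 + 0j,)                        # a zero at Nyquist
    response = view.get_amplitude_response (512)   # an AudioSignal of 512 frequency bins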
| |||
R | |||
RGB | Index | ||
a RGB buffer. |
|||
RGB (width, height) |
|||
blend() |
blend (rgb_a, rgb_b, ab) -- render the 'ab'-weighted combination of 'rgb_a' and 'rgb_b' into this image. 'ab' is from [0 .. 1]
| ||
copy() |
copy (rgb) -- copy pixels from 'rgb'.
| ||
fill() |
fill (x, y, width, height, (r, g, b[, a])) -- fill a rectangle with a color.
| ||
height |
the height, in pixels.
| ||
name |
the object name.
| ||
width |
the width, in pixels.
| ||
write_tga() |
write_tga (path) -- write an RLE-compressed targa image file.
| ||
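A sketch of the drawing calls; whether color components range over 0..255 is an assumption, and the output path is hypothetical:

    import nam
    rgb = nam.RGB (64, 64)
    rgb.fill (0, 0, 64, 64, (0, 0, 0))             # clear to black
    rgb.fill (8, 8, 48, 48, (255, 128, 0, 200))    # assumption: 0..255 color components
    rgb.write_tga ('/tmp/test.tga')                # hypothetical output path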
| |||
RGBView | Index | ||
PixelView | |||
DataView MidiView PoleZeroView |
a PixelView displaying the contents of a RGB buffer. |
||
RGBView() |
|||
redraw() |
redraw ([x, y, width, height]) -- redraw a rectangle from the contents of the associated RGB buffer. if called without arguments, the complete image is redrawn. otherwise, you must specify all four measures.
| ||
rgb |
the rgb buffer associated.
| ||
blit, pixels, widget (PixelView) | |||
| |||