The simplest way to use khmer’s functionality is through the command line scripts, located in the scripts/ directory of the khmer distribution. Below is our documentation for these scripts. Note that all scripts can be given -h which will print out a list of arguments taken by that script.
Many scripts take -x and -N parameters, which drive khmer’s memory usage. These parameters depend on details of your data set; for more information on how to choose them, see Choosing table sizes for khmer.
You can also override the default values of --ksize/-k, --n_tables/-N, and --min-tablesize/-x with the environment variables KHMER_KSIZE, KHMER_N_TABLES, and KHMER_MIN_TABLESIZE respectively.
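For example, the defaults could be overridden for a whole shell session like so (the file names are illustrative, and this assumes khmer's scripts are on your PATH):

```shell
# Override khmer's default parameters for every script run in this shell.
export KHMER_KSIZE=20
export KHMER_N_TABLES=4
export KHMER_MIN_TABLESIZE=1e8

# Subsequent scripts pick up these values without explicit -k/-N/-x flags;
# flags given on the command line still take precedence.
load-into-counting.py out.kh reads.fa
```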
Note
Almost all scripts accept input in either FASTA or FASTQ format, and output the same format. Some scripts may only recognize FASTQ if the file name ends in ‘.fq’ or ‘.fastq’, at least for now.
Files ending with ‘.gz’ will be treated as gzipped files, and files ending with ‘.bz2’ will be treated as bzip2’d files.
Build a k-mer counting table from the given sequences.
usage: load-into-counting.py [-h] [--version] [-q] [--ksize KSIZE] [--n_tables N_TABLES] [--min-tablesize MIN_TABLESIZE] [--threads N_THREADS] [-b] output_countingtable_filename input_sequence_filename [input_sequence_filename ...]
The name of the file to write the k-mer counting table to.
The names of one or more FAST[AQ] input sequence files.
show this help message and exit
show program’s version number and exit
k-mer size to use
number of k-mer counting tables to use
lower bound on tablesize to use
Number of simultaneous threads to execute
Do not count k-mers past 255
Note: with -b the output will be the exact size of the k-mer counting table and this script will use a constant amount of memory. In exchange k-mer counts will stop at 255. The memory usage of this script with -b will be about 1.15x the product of the -x and -N numbers.
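As a sketch of that memory bound (file names are illustrative), the footprint can be estimated directly from the -x and -N values before running anything:

```shell
# With -b, counts saturate at 255 and memory is fixed at ~1.15 * x * N bytes.
# Here: 1.15 * 5e7 * 4 = ~230 MB, regardless of how much input is loaded.
load-into-counting.py -b -k 20 -x 5e7 -N 4 counts.kh reads.fa
```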
Example:
load-into-counting.py -k 20 -x 5e7 out.kh data/100k-filtered.fa
Multiple threads can be used to accelerate the process, if you have extra cores to spare.
Example:
load-into-counting.py -k 20 -x 5e7 -T 4 out.kh data/100k-filtered.fa
Calculate abundance distribution of the k-mers in the sequence file using a pre-made k-mer counting table.
usage: abundance-dist.py [-h] [-z] [-s] [--version] input_counting_table_filename input_sequence_filename output_histogram_filename
The name of the input k-mer counting table file.
The name of the input FAST[AQ] sequence file.
The name of the output histogram file. The columns are: (1) k-mer abundance, (2) k-mer count, (3) cumulative count, (4) fraction of total distinct k-mers.
show this help message and exit
Do not output 0-count bins
Overwrite output file if it exists
show program’s version number and exit
Calculate the abundance distribution of k-mers from a single sequence file.
usage: abundance-dist-single.py [-h] [--version] [-q] [--ksize KSIZE] [--n_tables N_TABLES] [--min-tablesize MIN_TABLESIZE] [--threads THREADS] [-z] [-b] [-s] [--savetable filename] input_sequence_filename output_histogram_filename
The name of the input FAST[AQ] sequence file.
The name of the output histogram file. The columns are: (1) k-mer abundance, (2) k-mer count, (3) cumulative count, (4) fraction of total distinct k-mers.
show this help message and exit
show program’s version number and exit
k-mer size to use
number of k-mer counting tables to use
lower bound on tablesize to use
Number of simultaneous threads to execute
Do not output 0-count bins
Do not count k-mers past 255
Overwrite output file if it exists
Save the k-mer counting table to the specified filename.
Note that with -b this script is constant memory; in exchange, k-mer counts will stop at 255. The memory usage of this script with -b will be about 1.15x the product of the -x and -N numbers.
To count k-mers in multiple files use load-into-counting.py and abundance-dist.py.
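A sketch of that two-step workflow (file names are illustrative):

```shell
# Step 1: count k-mers across several input files into one table.
load-into-counting.py -k 20 -x 5e7 counts.kh reads-1.fa reads-2.fa
# Step 2: compute the abundance histogram against one of the files,
# suppressing empty bins with -z.
abundance-dist.py -z counts.kh reads-1.fa reads-1.hist
```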
Trim sequences at a minimum k-mer abundance.
usage: filter-abund.py [-h] [--version] [-q] [--ksize KSIZE] [--n_tables N_TABLES] [--min-tablesize MIN_TABLESIZE] [--threads THREADS] [--cutoff CUTOFF] [--variable-coverage] [--normalize-to NORMALIZE_TO] [-o optional_output_filename] input_presence_table_filename input_sequence_filename [input_sequence_filename ...]
The input k-mer presence table filename
Input FAST[AQ] sequence filename
show this help message and exit
show program’s version number and exit
k-mer size to use
number of k-mer counting tables to use
lower bound on tablesize to use
Number of simultaneous threads to execute
Trim at k-mers below this abundance.
Only trim low-abundance k-mers from sequences that have high coverage.
Base the variable-coverage cutoff on this median k-mer abundance.
Output the trimmed sequences into a single file with the given filename instead of creating a new file for each input file.
Trimmed sequences will be placed in ${input_sequence_filename}.abundfilt for each input sequence file. If the input sequences are from RNAseq or metagenome sequencing then --variable-coverage should be used.
Example:
load-into-counting.py -k 20 -x 5e7 table.kh data/100k-filtered.fa
filter-abund.py -C 2 table.kh data/100k-filtered.fa
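For RNAseq or metagenome data, a hedged sketch of the variable-coverage mode mentioned above (file names are illustrative):

```shell
# Only trim low-abundance k-mers from reads whose median k-mer
# abundance is at or above the --normalize-to level; low-coverage
# reads are left untouched rather than mistaken for errors.
filter-abund.py --variable-coverage --normalize-to 20 -C 2 table.kh metagenome-reads.fa
```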
Trim sequences at a minimum k-mer abundance (in-memory version).
usage: filter-abund-single.py [-h] [--version] [-q] [--ksize KSIZE] [--n_tables N_TABLES] [--min-tablesize MIN_TABLESIZE] [--threads THREADS] [--cutoff CUTOFF] [--savetable filename] input_sequence_filename
FAST[AQ] sequence file to trim
show this help message and exit
show program’s version number and exit
k-mer size to use
number of k-mer counting tables to use
lower bound on tablesize to use
Number of simultaneous threads to execute
Trim at k-mers below this abundance.
If present, the name of the file to save the k-mer counting table to
Trimmed sequences will be placed in ${input_sequence_filename}.abundfilt.
This script is constant memory.
To trim reads based on k-mer abundance across multiple files, use load-into-counting.py and filter-abund.py.
Example:
filter-abund-single.py -k 20 -x 5e7 -C 2 data/100k-filtered.fa
Count k-mer summary stats for sequences
usage: count-median.py [-h] [--version] input_counting_table_filename input_sequence_filename output_summary_filename
input k-mer count table filename
input FAST[AQ] sequence filename
output summary filename
show this help message and exit
show program’s version number and exit
Count the median/avg k-mer abundance for each sequence in the input file, based on the k-mer counts in the given k-mer counting table. Can be used to estimate expression levels (mRNAseq) or coverage (genomic/metagenomic).
The output file contains sequence id, median, average, stddev, and seq length.
NOTE: All ‘N’s in the input sequences are converted to ‘G’s.
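A minimal two-step sketch, assuming a counting table built earlier with load-into-counting.py (file names are illustrative):

```shell
# Build the counting table, then summarize per-sequence abundance.
load-into-counting.py -k 20 -x 5e7 counts.kh reads.fa
# Writes one line per sequence: id, median, average, stddev, length.
count-median.py counts.kh reads.fa reads.medcount
```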
Count the overlap k-mers which are the k-mers appearing in two sequence datasets.
usage: count-overlap.py [-h] [--version] [-q] [--ksize KSIZE] [--n_tables N_TABLES] [--min-tablesize MIN_TABLESIZE] input_presence_table_filename input_sequence_filename output_report_filename
input k-mer presence table filename
input sequence filename
output report filename
show this help message and exit
show program’s version number and exit
k-mer size to use
number of k-mer counting tables to use
lower bound on tablesize to use
An additional report will be written to ${output_report_filename}.curve containing the increase of overlap k-mers as the number of sequences in the second database increases.
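A hedged sketch of a complete run, assuming the presence table is first built from the first dataset with load-graph.py (file names are illustrative, and the presence table extension shown here may differ between khmer versions):

```shell
# Build a presence table from the first dataset.
load-graph.py -k 20 first first.fa
# Count k-mers from the second dataset that also occur in the first;
# a growth curve is also written to overlap.report.curve.
count-overlap.py first.pt second.fa overlap.report
```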
Load, partition, and annotate FAST[AQ] sequences
usage: do-partition.py [-h] [--version] [-q] [--ksize KSIZE] [--n_tables N_TABLES] [--min-tablesize MIN_TABLESIZE] [--subset-size SUBSET_SIZE] [--no-big-traverse] [--threads N_THREADS] [--keep-subsets] graphbase input_sequence_filename [input_sequence_filename ...]
base name for output files
input FAST[AQ] sequence filenames
show this help message and exit
show program’s version number and exit
k-mer size to use
number of k-mer counting tables to use
lower bound on tablesize to use
Set subset size (usually 1e5-1e6 is good)
Truncate graph joins at big traversals
Number of simultaneous threads to execute
Keep individual subsets (default: False)
Load in a set of sequences, partition them, merge the partitions, and annotate the original sequences files with the partition information.
This script combines the functionality of load-graph.py, partition-graph.py, merge-partitions.py, and annotate-partitions.py into one script. This is convenient but should probably not be used for large data sets, because do-partition.py doesn’t provide save/resume functionality.
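A single-command sketch of the combined workflow (file names are illustrative; the annotated output follows the annotate-partitions.py naming convention):

```shell
# Equivalent to running load-graph, partition-graph, merge-partitions,
# and annotate-partitions in sequence; output lands in reads.fa.part.
do-partition.py -k 20 -x 5e7 --threads 4 example reads.fa
```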
Load sequences into the compressible graph format plus optional tagset.
usage: load-graph.py [-h] [--version] [-q] [--ksize KSIZE] [--n_tables N_TABLES] [--min-tablesize MIN_TABLESIZE] [--threads N_THREADS] [--no-build-tagset] output_presence_table_filename input_sequence_filename [input_sequence_filename ...]
output k-mer presence table filename.
input FAST[AQ] sequence filename
show this help message and exit
show program’s version number and exit
k-mer size to use
number of k-mer counting tables to use
lower bound on tablesize to use
Number of simultaneous threads to execute
Do NOT construct tagset while loading sequences
See extract-partitions.py for a complete workflow.
Partition a sequence graph based upon waypoint connectivity
usage: partition-graph.py [-h] [--stoptags filename] [--subset-size SUBSET_SIZE] [--no-big-traverse] [--version] [--threads THREADS] basename
basename of the input k-mer presence table + tagset files
show this help message and exit
Use stoptags in this file during partitioning
Set subset size (usually 1e5-1e6 is good)
Truncate graph joins at big traversals
show program’s version number and exit
Number of simultaneous threads to execute
The resulting partition maps are saved as ‘${basename}.subset.#.pmap’ files.
See ‘Artifact removal’ to understand the stoptags argument.
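A hedged example using multiple threads, assuming load-graph.py has already produced the presence table and tagset for the given basename:

```shell
# Partition in parallel across 8 threads, with subsets of 1e5 waypoints;
# produces example.subset.*.pmap files for merge-partitions.py.
partition-graph.py --threads 8 --subset-size 1e5 example
```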
Merge partition map ‘.pmap’ files.
usage: merge-partitions.py [-h] [--ksize KSIZE] [--keep-subsets] [--version] graphbase
basename for input and output files
show this help message and exit
k-mer size (default: 32)
Keep individual subsets (default: False)
show program’s version number and exit
Take the ${graphbase}.subset.#.pmap files and merge them all into a single ${graphbase}.pmap.merged file for annotate-partitions.py to use.
Annotate sequences with partition IDs.
usage: annotate-partitions.py [-h] [--ksize KSIZE] [--version] graphbase input_sequence_filename [input_sequence_filename ...]
basename for input and output files
input FAST[AQ] sequences to annotate.
show this help message and exit
k-mer size (default: 32)
show program’s version number and exit
Load in a partitionmap (generally produced by partition-graph.py or merge-partitions.py) and annotate the sequences in the given files with their partition IDs. Use extract-partitions.py to extract sequences into separate group files.
Example (results will be in random-20-a.fa.part):
load-graph.py -k 20 example tests/test-data/random-20-a.fa
partition-graph.py example
merge-partitions.py -k 20 example
annotate-partitions.py -k 20 example tests/test-data/random-20-a.fa
Separate sequences that are annotated with partitions into grouped files.
usage: extract-partitions.py [-h] [--max-size MAX_SIZE] [--min-partition-size MIN_PART_SIZE] [--no-output-groups] [--output-unassigned] [--version] output_filename_prefix input_partition_filename [input_partition_filename ...]
show this help message and exit
Max group size (n sequences)
Minimum partition size worth keeping
Do not actually output groups files.
Output unassigned sequences, too
show program’s version number and exit
Example (results will be in example.group0000.fa):
load-graph.py -k 20 example tests/test-data/random-20-a.fa
partition-graph.py example
merge-partitions.py -k 20 example
annotate-partitions.py -k 20 example tests/test-data/random-20-a.fa
extract-partitions.py example random-20-a.fa.part
The following scripts are specialized scripts for finding and removing highly-connected k-mers (HCKs). See Partitioning large data sets (50m+ reads).
Find an initial set of highly connected k-mers.
usage: make-initial-stoptags.py [-h] [--version] [-q] [--ksize KSIZE] [--n_tables N_TABLES] [--min-tablesize MIN_TABLESIZE] [--subset-size SUBSET_SIZE] [--stoptags filename] graphbase
basename for input and output filenames
show this help message and exit
show program’s version number and exit
k-mer size to use
number of k-mer counting tables to use
lower bound on tablesize to use
Set subset size (the default of 1e4 is probably ok)
Use stoptags in this file during partitioning
Loads a k-mer presence table/tagset pair created by load-graph.py, and does a small set of traversals from graph waypoints; on these traversals, looks for k-mers that are repeatedly traversed in high-density regions of the graph, i.e. are highly connected. Outputs those k-mers as an initial set of stoptags, which can be fed into partition-graph.py, find-knots.py, and filter-stoptags.py.
The k-mer counting table size parameters are for a k-mer counting table used to keep track of repeatedly-traversed k-mers. The subset size option specifies the number of waypoints from which to traverse; for highly connected data sets, the default (1000) is probably ok.
Find all highly connected k-mers.
usage: find-knots.py [-h] [--n_tables N_TABLES] [--min-tablesize MIN_TABLESIZE] [--version] graphbase
Basename for the input and output files.
show this help message and exit
number of k-mer counting tables to use
lower bound on the size of the k-mer counting table(s)
show program’s version number and exit
Load a k-mer presence table/tagset pair created by load-graph.py, and a set of pmap files created by partition-graph.py. Go through each pmap file, select the largest partition in each, and do the same kind of traversal as in make-initial-stoptags.py from each of the waypoints in that partition; this should identify all of the HCKs in that partition. These HCKs are output to <graphbase>.stoptags after each pmap file.
Parameter choice is reasonably important. See the pipeline in Partitioning large data sets (50m+ reads) for an example run.
This script is not very scalable and may blow up memory and die horribly. You should be able to use the intermediate stoptags to restart the process, and if you eliminate the already-processed pmap files, you can continue where you left off.
Trim sequences at stoptags.
usage: filter-stoptags.py [-h] [--ksize KSIZE] [--version] input_stoptags_filename input_sequence_filename [input_sequence_filename ...]
show this help message and exit
k-mer size
show program’s version number and exit
Load stoptags in from the given .stoptags file and use them to trim or remove the sequences in <file1-N>. Trimmed sequences will be placed in <fileN>.stopfilt.
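A minimal sketch, assuming a stoptags file produced earlier by make-initial-stoptags.py or find-knots.py (file names are illustrative):

```shell
# Trim reads at highly connected k-mers; output goes to reads.fa.stopfilt.
filter-stoptags.py -k 32 example.stoptags reads.fa
```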
Do digital normalization (remove mostly redundant sequences)
usage: normalize-by-median.py [-h] [--version] [-q] [--ksize KSIZE] [--n_tables N_TABLES] [--min-tablesize MIN_TABLESIZE] [-C CUTOFF] [-p] [-s filename] [-R filename] [-f] [--save-on-failure] [-d DUMP_FREQUENCY] [-o filename] [-l filename] input_sequence_filename [input_sequence_filename ...]
Input FAST[AQ] sequence filename.
show this help message and exit
show program’s version number and exit
k-mer size to use
number of k-mer counting tables to use
lower bound on tablesize to use
continue on next file if read errors are encountered
Save k-mer counting table when an error occurs
dump k-mer counting table every d files
only output a single file with the specified filename
load a precomputed k-mer table from disk
Discard sequences based on whether or not their median k-mer abundance lies above a specified cutoff. Kept sequences will be placed in <fileN>.keep.
Paired end reads will be considered together if -p is set. If either read will be kept, then both will be kept. This should result in keeping (or discarding) each sequencing fragment. This helps with retention of repeats, especially.
With -s/--savetable, the k-mer counting table will be saved to the specified file after all sequences have been processed. With -d, the k-mer counting table will be saved every d files for multifile runs; if -s is set, the specified name will be used, and if not, the name backup.ct will be used. -l/--loadtable will load the specified k-mer counting table before processing the specified files. Note that these tables are in the same format as those produced by load-into-counting.py and consumed by abundance-dist.py.
-f/--fault-tolerant will force the program to continue upon encountering a formatting error in a sequence file; the k-mer counting table up to that point will be dumped, and processing will continue on the next file.
Example:
normalize-by-median.py -k 17 tests/test-data/test-abund-read-2.fa
Example:
normalize-by-median.py -p -k 17 tests/test-data/test-abund-read-paired.fa
Example:
normalize-by-median.py -k 17 -f tests/test-data/test-error-reads.fq tests/test-data/test-fastq-reads.fq
Example:
normalize-by-median.py -k 17 -d 2 -s test.ct tests/test-data/test-abund-read-2.fa tests/test-data/test-fastq-reads
Take a mixture of reads and split into pairs and orphans.
usage: extract-paired-reads.py [-h] [--version] infile
show this help message and exit
show program’s version number and exit
The output is two files, <input file>.pe and <input file>.se, placed in the current directory. The .pe file contains interleaved and properly paired sequences, while the .se file contains orphan sequences.
Many assemblers (e.g. Velvet) require that you give them either perfectly interleaved files, or files containing only single reads. This script takes files that were originally interleaved but where reads may have been orphaned via error filtering, application of abundance filtering, digital normalization in non-paired mode, or partitioning.
Example:
extract-paired-reads.py tests/test-data/paired.fq
Produce interleaved files from R1/R2 paired files
usage: interleave-reads.py [-h] [-o filename] [--version] infiles [infiles ...]
show this help message and exit
show program’s version number and exit
The output is an interleaved set of reads, with each read in <R1> paired with a read in <R2>. By default, the output goes to stdout unless -o/--output is specified.
As a “bonus”, this script ensures that read names are formatted in a consistent way, such that they look like the pre-1.8 Casava format (@name/1, @name/2).
Example:
interleave-reads.py tests/test-data/paired.fq.1 tests/test-data/paired.fq.2 -o paired.fq
Split interleaved reads into two files, left and right.
usage: split-paired-reads.py [-h] [--version] infile
show this help message and exit
show program’s version number and exit
Some programs want paired-end read input in the One True Format, which is interleaved; other programs want input in the Insanely Bad Format, with left- and right- reads separated. This reformats the former to the latter.
Example:
split-paired-reads.py tests/test-data/paired.fq