The following example demonstrates how to upload the DBpedia data sets into Virtuoso using the Bulk Loading Sequence.
tmp" in your filesystem, and it is within a directory specified in the DirsAllowed param defined in your virtuoso.ini file.
tmp" folder .bz2" data set files need to be uncompressed first as the bulk loader scripts only supports the auto extraction of gzip'ed ".gz" files.
http://dbpedia.org":
SQL> ld_dir ('tmp', '*.*', 'http://dbpedia.org');
Done. -- 90 msec.
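You can confirm which files were registered, and later check their load status, by querying the bulk loader's DB.DBA.load_list table (ll_state is 0 for a file waiting to be loaded, 1 for a file currently loading, and 2 for a file already loaded):

SQL> SELECT ll_file, ll_graph, ll_state FROM DB.DBA.load_list;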
Note: if gzip'ed files of the form <name>.<ext>.gz, e.g., 'ontology.owl.gz' or 'ontology.nt.gz', exist in the "tmp" folder, then their content will also be loaded into the specified graph.
Alternatively, you can create a file called "global.graph" in the "tmp" folder, with its entire content being the URI of the desired target graph, e.g., http://dbpedia.org
Run the bulk loader by executing the rdf_loader_run procedure.
This may take some time, depending on the size of the data sets.
SQL> rdf_loader_run ();
Done. -- 100 msec.
10:21:50 PL LOG: Loader started
10:21:50 PL LOG: No more files to load. Loader has finished
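For very large data sets, the load can be sped up by starting rdf_loader_run () concurrently in several isql sessions, scaling the number of loaders with the number of available CPU cores. Progress can then be watched from a separate session by counting the files already loaded:

SQL> SELECT COUNT(*) FROM DB.DBA.load_list WHERE ll_state = 2;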
Once the load is complete, run a checkpoint to commit all transactions to the database.
SQL> checkpoint;
Done. -- 53 msec.
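If automatic checkpointing was disabled for the duration of the load, as is common practice for large bulk loads, remember to re-enable it afterwards; the 60-minute interval below is only an example value:

SQL> checkpoint_interval (60);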
To verify the load, count the triples in the target graph:

SQL> SPARQL SELECT COUNT(*) FROM <http://dbpedia.org> WHERE { ?s ?p ?o } ;
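Beyond the raw count, a quick spot check of a few triples can confirm the data landed in the expected graph; the LIMIT clause simply keeps the result set small:

SQL> SPARQL SELECT * FROM <http://dbpedia.org> WHERE { ?s ?p ?o } LIMIT 10 ;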
Install the DBpedia and RDF Mappers VAD packages (the second argument, 0, tells vad_install to read the package file from the server's filesystem rather than from DAV):

SQL> vad_install ('dbpedia_dav.vad', 0);
SQL> vad_install ('rdf_mappers_dav.vad', 0);
You should now be able to browse DBpedia resources from your Virtuoso instance, e.g.: http://<your-cname>:<your-port>/resource/Bob_Marley
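If you prefer to verify from isql rather than a browser, a SPARQL DESCRIBE against the loaded graph should return triples for the same resource (the resource URI below assumes the standard DBpedia naming scheme):

SQL> SPARQL DESCRIBE <http://dbpedia.org/resource/Bob_Marley> ;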