sqlite3.OperationalError: database or disk is full

I’m receiving the following error when I run an import or the upgrade command. I’m not sure what it’s referring to, since I currently have more than 5 TB of free space on my QNAP NAS, and I’m not sure how to address it.

Traceback (most recent call last):
  File "/opt/bin/beet", line 11, in <module>
    load_entry_point('beets==1.4.6', 'console_scripts', 'beet')()
  File "/opt/lib/python2.7/site-packages/beets/ui/__init__.py", line 1256, in main
    _raw_main(args)
  File "/opt/lib/python2.7/site-packages/beets/ui/__init__.py", line 1243, in _raw_main
    subcommand.func(lib, suboptions, subargs)
  File "/opt/lib/python2.7/site-packages/beets/ui/commands.py", line 934, in import_func
    import_files(lib, paths, query)
  File "/opt/lib/python2.7/site-packages/beets/ui/commands.py", line 911, in import_files
    session.run()
  File "/opt/lib/python2.7/site-packages/beets/importer.py", line 325, in run
    pl.run_parallel(QUEUE_SIZE)
  File "/opt/lib/python2.7/site-packages/beets/util/pipeline.py", line 445, in run_parallel
    six.reraise(exc_info[0], exc_info[1], exc_info[2])
  File "/opt/lib/python2.7/site-packages/beets/util/pipeline.py", line 312, in run
    out = self.coro.send(msg)
  File "/opt/lib/python2.7/site-packages/beets/util/pipeline.py", line 171, in coro
    task = func(*(args + (task,)))
  File "/opt/lib/python2.7/site-packages/beets/importer.py", line 1348, in user_query
    apply_choice(session, task)
  File "/opt/lib/python2.7/site-packages/beets/importer.py", line 1415, in apply_choice
    task.add(session.lib)
  File "/opt/lib/python2.7/site-packages/beets/importer.py", line 716, in add
    self.remove_replaced(lib)
  File "/opt/lib/python2.7/site-packages/beets/importer.py", line 791, in remove_replaced
    dup_item.remove()
  File "/opt/lib/python2.7/site-packages/beets/library.py", line 741, in remove
    super(Item, self).remove()
  File "/opt/lib/python2.7/site-packages/beets/library.py", line 349, in remove
    super(LibModel, self).remove()
  File "/opt/lib/python2.7/site-packages/beets/dbcore/db.py", line 422, in remove
    (self.id,)
  File "/opt/lib/python2.7/site-packages/beets/dbcore/db.py", line 693, in mutate
    cursor = self.db._connection().execute(statement, subvals)
sqlite3.OperationalError: database or disk is full

How much space have you got left in /tmp? It looks like SQLite uses /tmp for storing some temporary files (according to Google).

How do you figure that out? I believe it’s 64 MB, based on what Google turned up.


It seems that the people over at Headphones had a similar experience. They were able to address it by adding a setting for a custom /tmp location so that the default wouldn’t be used. Is it possible to do the same here?

You can also try just cleaning out your /tmp directory.
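If you want to see what is actually taking up space before deleting anything, this is one way to list the largest entries under /tmp, biggest first (assuming GNU `du` and `sort` are available; a BusyBox `sort`, common on NAS firmware, may lack the `-h` flag):

```shell
# List everything under /tmp with human-readable sizes,
# sort largest first (-h understands suffixes like K/M/G), show the top 20
du -ah /tmp 2>/dev/null | sort -rh | head -n 20
```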

You can check how much space is left in your temporary directory using df;

df -h /tmp

You may be able to change the temporary path by setting TMPDIR in your environment;

TMPDIR=/new/temporary/path beet

If that works, you can permanently set it by adding this to your .bashrc in your home directory.

export TMPDIR=/new/temporary/path

Although this may affect other software using SQLite too.
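If affecting other software is a concern, SQLite on Unix also checks a `SQLITE_TMPDIR` environment variable before falling back to `TMPDIR`, so (assuming your build of SQLite honours it, which stock builds do) you can redirect only SQLite’s scratch files:

```shell
# SQLITE_TMPDIR is consulted by SQLite before TMPDIR on Unix,
# so only SQLite's temporary files move; other programs keep using /tmp
SQLITE_TMPDIR=/new/temporary/path beet import /path/to/music
```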

Thank you adrian and jackwilsdon for your time and consideration!

This is what I got immediately after beets failed with “database or disk is full”:

[~] # df -h /tmp                                                            
Filesystem                Size      Used Available Use% Mounted on
tmpfs                    64.0M    632.0K     63.4M   1% /tmp

I executed the command to change the directory and didn’t receive any errors, but when I then ran the beet import command, I got the same “database or disk is full” error.

Running into a weird conundrum here.

Well your /tmp directory is completely full, which is definitely the cause of the error. You need to prefix all of your commands with the TMPDIR change if you are not changing it permanently (by adding the export command in my previous post to your .bashrc);

TMPDIR=/new/temporary/path beet import ...
TMPDIR=/new/temporary/path beet ls

You can change the temporary directory for only the current session (i.e. the current shell) by just running this;

export TMPDIR=/new/temporary/path

And then using beet normally. Disconnecting or opening a new session will reset the temporary path back to /tmp.

Thanks for the feedback!

I tried executing the command exactly as you stated, but I continue to receive the “database or disk is full” error.

The verbatim command that I ran was:

TMPDIR=/share/CACHEDEV1_DATA/Multimedia/Music/Data/tmp/ beet import /share/CACHEDEV1_DATA/Multimedia/[Fresh]/Music/Electronic/

I then received the same error output as in the original post.

Even with the directory change I continue to receive this error. It doesn’t seem logical.

I just want to cross-post this to the QNAP forums in case someone there figures it out, as the user who introduced me to beets is also having this trouble and couldn’t resolve it.

https://forum.qnap.com/viewtopic.php?p=630537#p630537

I asked about this issue on the SQL discussion board in QNAP’s forums, and I was given this response:

The QTS /tmp can’t be relocated or resized easily. Good policy for an application requiring large amount of temporary space would be to make use of an app specific temporary folder, configurable path, or follow at least the TMP or TEMP environment variables.

Would it be possible to add a feature for defining a custom temporary directory in the config file? I tried adding the line tmpdir: "/share/CACHEDEV1_DATA/Multimedia/Music/Data/tmp/" under paths:, but I didn’t have any luck with that.

Thanks!
-Z

Hmm… looking back, I think /tmp was a red herring here. Your df output suggests that your /tmp is only 1% full, and I can’t imagine SQLite could want 64M of its own space.

I don’t have other obvious ideas about why SQLite would be throwing this error for you. Maybe it’s worth trying deleting your database file and starting over, in case there’s some kind of corruption?
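If you try that, it’s safer to rename the database than to delete it outright. A sketch, assuming the default beets database location of ~/.config/beets/library.db (yours may differ; `beet config -p` prints where your configuration lives, and the `library` option in it sets the database path):

```shell
# Move the possibly-corrupt database aside instead of deleting it,
# so you can restore it if starting over doesn't help
mv ~/.config/beets/library.db ~/.config/beets/library.db.bak

# Re-import; beets creates a fresh database on the next run
beet import /path/to/music
```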

OK, this worked, but I needed to go a step further: completely removing beets, deleting the data folder, and then reinstalling it. Thank you adrian and jackwilsdon for your support!

Oops yes, it looks like I misread the output from df! Sorry @Zoroaster! :frowning:

Glad it’s now resolved.