Seeing a performance problem with large sets of data?

Dec 31, 2012 at 4:32 PM


I think I am seeing some performance degradation as the number of records in a table increases.  I'm using the following script:

$strCurrentPath = (Get-Location).Path
Import-Module $strCurrentPath\SQLite
$dbDataFile = "$strCurrentPath\testdb.sqlite"

rm -Force $dbDataFile -ErrorAction SilentlyContinue

New-PSDrive -PSProvider SQLite -Name filesdb -Root "Data source=$dbDataFile;Cache Size=2097152;Page Size=2097152;Synchronous=Off;"
new-item -path filesdb:/Files -value "id INTEGER PRIMARY KEY, filepath TEXT NOT NULL UNIQUE, presize INTEGER, postsize INTEGER, hash TEXT"

for ($i = 0; $i -lt 500; $i++) {
	Write-Host "Record $i"
	measure-command { New-Item filesdb:/Files -filepath "string $i" -presize 0 -postsize 0 -hash "string" | Out-Null }
}

Remove-PSDrive filesdb


On the 10th row added, the Measure-Command cmdlet tells me the insert took 0.0343507 seconds.  On the 500th row, it took 3.343963 seconds.

The upward trend generally continues as I add more records (though I am still waiting for the 1000-record test to finish).

Have I got something wrong with the structure of the table or the connection string?  Or is this "normal" behaviour?

Oct 22, 2014 at 9:11 PM
Edited Oct 22, 2014 at 9:15 PM
Instead of the New-Item cmdlet, try this:

Invoke-Item filesdb:/Files -sql sql_statement

where, as the SQL statement, you pass something like "insert into table values ....".
In my table with 33000 records it works very fast.
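For illustration, rewriting the original test loop along these lines might look like the sketch below. This is only a sketch: it assumes the SQLite provider's Invoke-Item accepts an -sql parameter as described above, and the column list and quoting are taken from the table definition in the question.

```powershell
# Sketch: insert rows with raw SQL via Invoke-Item instead of New-Item,
# assuming the provider drive "filesdb:" from the original script is mounted.
for ($i = 0; $i -lt 500; $i++) {
    # Build an INSERT against the Files table defined in the question;
    # id is omitted so the INTEGER PRIMARY KEY autoincrements.
    $sql = "INSERT INTO Files (filepath, presize, postsize, hash) " +
           "VALUES ('string $i', 0, 0, 'string')"
    measure-command { Invoke-Item filesdb:/Files -sql $sql } | Out-Null
}
```

Whether this avoids the per-record slowdown would need to be confirmed by re-running the timing test from the question.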