How SQList deals with large SharePoint datasets and replicates them to SQL Server

SQList can export large SharePoint lists with many rows, columns, and folders; however, it is important to be aware of how it handles them.
 
SQList cycles every 10 seconds: one "cycle" means that SQList has gone through all the sites/lists you configured and replicated the changes to the SQL Server database; it then waits 10 seconds and starts again.
 
However, no more than 500 rows per list are replicated in a single cycle; this avoids taking too much memory/CPU from the SharePoint server.
 
Furthermore, the time it takes to replicate a single item varies depending on the number and type of columns; e.g. 500 items in one list could take less time than 50 items in another.
 
As an example, for a list with 1,400 rows, SQList will take 3 cycles to synchronise the SQL table with the list the first time; subsequently, all new updates will be replicated in a single cycle (provided there are fewer than 500) and will usually take no more than a few seconds.
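As a rough illustration (a sketch for estimation purposes only, not part of SQList itself), the number of cycles needed for an initial synchronisation follows directly from the 500-row batch limit described above:

```python
import math

BATCH_SIZE = 500  # maximum rows replicated per list in one cycle (from this article)


def cycles_needed(row_count: int) -> int:
    """Estimate the number of replication cycles for the initial synchronisation."""
    return math.ceil(row_count / BATCH_SIZE)


# The 1,400-row example above: ceil(1400 / 500) = 3 cycles.
print(cycles_needed(1400))  # 3
```

Note that this only estimates the cycle count; the actual elapsed time per cycle varies with the number and type of columns, as explained above.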

Be aware that the larger the data set being replicated, the longer the first replication will take. Subsequent updates will be processed very quickly.

Important: note that during this initial phase it may appear that SQList is not exporting data; that is not the case. Unless there are errors preventing the synchronisation from completing (check the event log), it is paramount that you refrain from stopping and restarting SQList in the hope that it will "unblock" the update: doing so will cause SQList to start replicating the list from the beginning, as it will interpret the previous replication as failed.