My hub took about 50 minutes to save the database of IPdbBOT (http://board.univ-angers.fr/thread.php?threadid=5250&boardid=26), which was built up over 11 days on a hub of 1000+ users (7.885.887 bytes in size). I have an idea: separating the IP history from the ISP & DNS lookup results, i.e. from:
----------------------------------------------
["213.139.225.18"] = {
["Nicks"] = {
["highscreenn_"] = {
[1] = "08.09.2005 - 11:28:32",
[2] = "09.09.2005 - 16:01:59",
},
["hpcompaqp"] = {
[1] = "09.09.2005 - 07:44:46",
[2] = "09.09.2005 - 16:28:34",
},
},
["ISPName"] = "TR - Meteksan Net",
["DNSHost"] = "Couldn't resolve remote host name",
},
----------------------------------------------
to:
----------------------------------------------
History DB:
213.139.225.18|highscreenn_|08.09.2005 - 11:28:32|09.09.2005 - 16:01:59|
213.139.225.18|hpcompaqp|09.09.2005 - 07:44:46|09.09.2005 - 16:28:34|
IP DB:
213.139.225.18|TR - Meteksan Net|Couldn't resolve remote host name|
----------------------------------------------
but I have no idea whether it will speed up the save part or slow down the whole script..
Any ideas on how to fix this will be greatly welcomed... Thanks
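The proposed split could be sketched like this (a sketch only; `FlattenDB` and the file-name parameters are illustrative names, not part of the bot, and `tDB` is assumed to have the nested shape shown above):

```lua
-- Hypothetical sketch: flatten the nested table above into the two
-- pipe-delimited files (History DB and IP DB).
local function FlattenDB(tDB, sHistoryFile, sIPFile)
	local hHistory = assert(io.open(sHistoryFile, "w+"))
	local hIP = assert(io.open(sIPFile, "w+"))
	for sIP, tEntry in pairs(tDB) do
		for sNick, tDates in pairs(tEntry.Nicks) do
			-- one line per nick: IP|nick|date1|date2|...|
			hHistory:write(sIP.."|"..sNick.."|"..table.concat(tDates, "|").."|\n")
		end
		-- one line per IP: IP|ISP name|DNS host|
		hIP:write(sIP.."|"..tEntry.ISPName.."|"..tEntry.DNSHost.."|\n")
	end
	hHistory:close()
	hIP:close()
end
```

Plain string concatenation like this should already be much cheaper than re-serializing the whole nested table.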
Can you show the routine which saves the stuff?
plop
Sure, I use file serializing by nErBoS:
tFunctions.Serialize = function(tTable, sTableName, sTab)
assert(tTable, "tTable equals nil");
assert(sTableName, "sTableName equals nil");
assert(type(tTable) == "table", "tTable must be a table!");
assert(type(sTableName) == "string", "sTableName must be a string!");
sTab = sTab or "";
local sTmp = sTab..sTableName.." = {\n" -- local, so repeated calls don't leak a global
for key, value in pairs(tTable) do
local sKey = (type(key) == "string") and string.format("[%q]",key) or string.format("[%d]",key);
if(type(value) == "table") then
sTmp = sTmp..tFunctions.Serialize(value, sKey, sTab.."\t");
else
local sValue = (type(value) == "string") and string.format("%q",value) or tostring(value);
sTmp = sTmp..sTab.."\t"..sKey.." = "..sValue
end
sTmp = sTmp..",\n"
end
sTmp = sTmp..sTab.."}"
return sTmp
end
tFunctions.SaveToFile = function(file , table , tablename)
local handle = assert(io.open(file, "w+"))
handle:write(tFunctions.Serialize(table, tablename))
handle:flush()
handle:close()
end
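For completeness, a saved file produced this way can be loaded back by just executing it as a Lua chunk. A minimal sketch, assuming the table was serialized under the name `tDB` (`LoadFromFile` is an illustrative name, not part of the script above):

```lua
-- Hypothetical loader: the saved file contains "tDB = { ... }",
-- so compiling and running it recreates the table as a global.
local function LoadFromFile(sFileName)
	local fChunk = assert(loadfile(sFileName)) -- compile the saved assignment
	fChunk()                                   -- run it, defining global tDB
	return tDB
end
```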
try splitting the table into several files, and save only the changed things.
just this needs some extra bookkeeping to keep track of the changes, which decreases the overall performance a little.
if that doesn't help, you're going to need some fancier database system (files and folders), so you can minimize the data to be saved (the changed part).
plop
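One way to do the bookkeeping plop describes is a dirty-flag table: mark each top-level key when it changes, then save only the marked entries. A sketch only; `tDirty`, `MarkChanged`, and `SaveChanged` are illustrative names, and it writes the simplified IP DB line format from the first post:

```lua
-- Hypothetical dirty-flag sketch: remember which IPs changed since
-- the last save, write one small file per changed IP.
local tDirty = {}

local function MarkChanged(sIP)
	tDirty[sIP] = true
end

local function SaveChanged(tDB, sDir)
	local nSaved = 0
	for sIP in pairs(tDirty) do
		local h = assert(io.open(sDir.."/"..sIP..".txt", "w+"))
		h:write(sIP.."|"..tDB[sIP].ISPName.."|"..tDB[sIP].DNSHost.."|\n")
		h:close()
		nSaved = nSaved + 1
	end
	tDirty = {} -- everything on disk is current again
	return nSaved
end
```

The cost per change is one tiny file write instead of re-serializing the whole database.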
QuoteOriginally posted by plop
save only the changed things.
how will I do that? I dont have any ideas how to keep track of changes only, and saving file without creating duplicates in append mode...
QuoteOriginally posted by GeceBekcisi
QuoteOriginally posted by plop
save only the changed things.
how will I do that? I dont have any ideas how to keep track of changes only, and saving file without creating duplicates in append mode...
compare the current value with the value already in the table.
lua drops duplicate keys in a table when loading it; it keeps the last value it finds.
plop
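That property makes a pure append-only log workable: duplicates in the file are harmless because later assignments overwrite earlier ones on load. A minimal sketch of the idea (`AppendValue` and `LoadLog` are illustrative names):

```lua
-- Hypothetical append-only log: each change is appended as a Lua
-- assignment; reloading keeps only the last value per key.
local function AppendValue(sFile, sKey, sValue)
	local h = assert(io.open(sFile, "a+"))
	h:write(string.format("tDB[%q] = %q\n", sKey, sValue))
	h:close()
end

local function LoadLog(sFile)
	tDB = {} -- global, so the loaded chunk can fill it
	local fChunk = assert(loadfile(sFile))
	fChunk() -- replays every appended assignment in order
	return tDB
end
```

No duplicate checking is needed at write time; the log can be compacted occasionally by saving the loaded table once and truncating the file.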
Whenever a value changes:
local f = io.open("DB_File", "a+"); f:write(changed_value); f:close()
Hope this helps.
writing everything into a single file in append mode needs a lot of work, so I decided to test/use another method. So now, I need help with writing to files so that:
- they have filenames based on the current date, i.e. sFileName = os.date("%d-%m-%Y")
- at 00:00, writing to the current file stops and a new file is opened with the updated current date
If I can achieve this, these files can simply be appended to, and this will speed up most things..
So, I need help with writing files this way.. Thanks for any help..
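One way to sketch the daily rotation, without needing a timer at exactly 00:00: recompute the date-based filename on every write and reopen the file whenever it changes. `GetLogHandle` and `LogLine` are illustrative names, not existing functions:

```lua
-- Hypothetical daily-rotation sketch: the handle is reopened the
-- first time a write happens after midnight, when os.date changes.
local sCurrentName, hCurrent

local function GetLogHandle()
	local sName = os.date("%d-%m-%Y")..".log"
	if sName ~= sCurrentName then
		if hCurrent then hCurrent:close() end   -- midnight passed: rotate
		hCurrent = assert(io.open(sName, "a+")) -- append, never rewrite
		sCurrentName = sName
	end
	return hCurrent
end

local function LogLine(sLine)
	local h = GetLogHandle()
	h:write(sLine.."\n")
	h:flush()
end
```

Since the check is just a string comparison against a cached name, it costs almost nothing per write.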