I am currently trying to write a small topology optimization algorithm in Teddy. This involves writing large arrays with over 5000 items into the cdb with “sto”. Saving 6 arrays with this many items takes far too long (approx. 15-20 minutes) for an effective optimizer.
Apart from the fact that there are surely better ways than writing such a tool in Teddy with cadinp, I have a few questions:
Is it normal that saving large arrays takes this long (regardless of the “programming language”)?
Why does it take so long to write the arrays into the cdb?
Is there a way to store and access the arrays more effectively or faster?
Try preallocating the array at the start, then write your data, e.g.:
+prog template
head variable
del#A ; sto#A(5000) 0
loop#i 5001 ; sto#A(#i) #i ; endloop
prt#A
end
The example runs in about 1-2 s, with and without preallocation, in SOFiSTiK 2024. It’s quite strange that writing a sto variable takes about 20 min. You probably hit an edge case, or there is a problem with your software/hardware. Which version do you use? Try reinstalling the software first of all.
I work with the Sofistik version 2025-0 Build 135 (student).
I have completely reinstalled the software.
Your example also calculates very quickly for me. Maybe my problem is due to the way I store and compute with the arrays. Here is an example from my code that takes about 15 minutes. The number of quads is 7449; the total number of stored items in the cdb is 37652. The first prog template runs are very fast; the last one takes about 14 minutes:
+prog template
head Delete all variables
del#*
end
+prog template
head User input
//Optimization parameters
sto#volfrac 0.4
sto#rmin 5.4
sto#penal 3.0
sto#lc 10
sto#max_iterations 10
//Material parameters
sto#E_max 1 $normalized
sto#E_min 0.001 $normalized
end
+prog template
head test 5
//Read the quad forces and store them in an array
let#lc 10
let#i 0
let#cdb_ier 0
@key quad_for #lc
loop#i 100000
if (#cdb_ier<2)
sto#qnr(#i) @nr
let#nx @nx
let#ny @ny
$let#nxy @nxy
sto#f_quad(#i) (sqr(#nx^2+#ny^2)) //Resultant (without shear)
endif
endloop #cdb_ier<2
sto#anzquad #i
//Array - Standardized element densities x=[0,1]
sto#x(0:#anzquad) #volfrac
end
+prog template
head test 6
//Read the nodal displacements
let#lc 10
let#i 0
let#cdb_ier 0
@key n_disp #lc
loop#i 10000000
if (#cdb_ier<=2)
sto#knr(#i) @nr
let#ux @ux
let#uy @uy
let#uz @uz
sto#u_knot(#i) sqr(#ux^2+#uy^2+#uz^2)
endif
endloop #cdb_ier<2
sto#anzknot #i
end
+prog template
head test 8
//Map node numbers to quad numbers
let#cdb_ier 0
@key quad
loop#i #anzquad
let#quad_knr1 @node(0)
let#quad_knr2 @node(1)
let#quad_knr3 @node(2)
let#quad_knr4 @node(3)
loop#ii #anzknot
if #quad_knr1 == #knr(#ii)
let#u1 #u_knot(#ii)*1000
elseif #quad_knr2 == #knr(#ii)
let#u2 #u_knot(#ii)*1000
elseif #quad_knr3 == #knr(#ii)
let#u3 #u_knot(#ii)*1000
elseif #quad_knr4 == #knr(#ii)
let#u4 #u_knot(#ii)*1000
endif
endloop
let#u_quad (1/4*(#u1+#u2+#u3+#u4))
let#ce (#u_quad*#f_quad(#i)) //Ce = ue * Fe (element compliance)
sto#dc (-#penal*#x(#i)^(#penal-1)*(#E_max-#E_min)*#ce) //Sensitivity
endloop
end
The last template has a painful double loop:
loop#i #anzquad
loop#ii #anzknot
For every quad it goes through all nodes.
You can add a counter that checks whether the quad’s 4 nodes have already been found; once they are, exit the inner loop. That should speed it up and cut roughly half of the time.
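To see why the early exit helps, and why a lookup table helps even more, here is a small Python model of the same node-pairing problem. This is not CADINP, and the names `quad_disp_naive` and `quad_disp_indexed` are invented for this sketch:

```python
# Illustrative Python model (not CADINP) of the quad/node pairing above.

def quad_disp_naive(quad_nodes, node_ids, u_knot):
    """For each quad, scan the node list to find its 4 corner values:
    O(quads * nodes) in the worst case."""
    result = []
    for nodes in quad_nodes:
        u = []
        for nid in nodes:
            for i, knr in enumerate(node_ids):
                if knr == nid:
                    u.append(u_knot[i])
                    break  # the 'counter' idea: stop once the node is found
        result.append(sum(u) / 4.0)
    return result

def quad_disp_indexed(quad_nodes, node_ids, u_knot):
    """Build a node-number -> displacement map once, then each corner
    lookup is O(1): O(quads + nodes) overall."""
    disp = dict(zip(node_ids, u_knot))
    return [sum(disp[nid] for nid in nodes) / 4.0 for nodes in quad_nodes]
```

Both versions return the same averaged displacements; the indexed one turns the O(anzquad * anzknot) pairing into a single pass over each list, which matters far more than halving the inner loop.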
Run this:
+prog template
head variable A allocated
del#A ; sto#A(50000) 0
loop#i 50001 ; sto#A(#i) #i ; endloop
prt#A
end
+prog template
head variable B nonallocated
del#B ;
loop#i 50001 ; sto#B(#i) #i ; endloop
prt#B
end
What I found is that the problem occurs when your arrays are bigger. The code you provided takes 2 seconds for the allocated array and 2 seconds for the non-allocated one.
However, when you make a 50000-element array instead of 5000, intuition tells you that you will have to wait 10x longer, which equals 20 seconds.
Unfortunately it is not like this: the allocated array needs 76 seconds, and for the non-allocated one I terminated the process after 700 seconds… at that point ~19k elements had been written.
Probably it is due to the sequential nature of access to the database: every new array element has to go past the previously stored indexes to be saved, so the time needed grows roughly quadratically rather than linearly.
Anyway, it would be nice to know another, more efficient way to save large arrays to the database.
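If the append-and-scan hypothesis is right, a toy Python model (a guess at the mechanism, not the real CDB internals; `write_cost` is an invented name) makes the arithmetic concrete:

```python
# Toy model (not the real CDB) of an append-only record store where
# writing element i first scans past the i records already stored.

def write_cost(n):
    """Total 'records scanned' to append n elements one by one:
    0 + 1 + ... + (n-1) = n*(n-1)/2, i.e. quadratic in n."""
    return sum(i for i in range(n))

# 10x more elements -> ~100x the scanning work, not 10x:
ratio = write_cost(50000) / write_cost(5000)
```

A pure-scan model predicts roughly 100x the work for 10x the elements; the measured 38x (2 s to 76 s) suggests a mix of constant per-write overhead and scan cost, but either way the growth is clearly superlinear, not just 10x.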
Thanks for the hint! This will definitely speed things up, but unfortunately the calculation still takes far too long. Are there perhaps other ways to save and retrieve arrays faster in cadinp, e.g. using a sys command or something similar?
I think the longer calculation time for the second prog template is due to the fact that array #A is still stored in the database. The write process for #B therefore only starts after the 50,000 values of array #A, and the program probably has to pass all 50,000 values of #A each time a new value of #B is written.
Try this out. With this, both write operations took a similar amount of time for me (#a 80 s, #b 86 s):
+prog template urs:71.3
head variable A allocated
del#* ; sto#A(50000) 0
loop#i 50001 ; sto#A(#i) #i ; endloop
prt#A
end
+prog template urs:71.4
head variable B nonallocated
del#* ;
loop#i 50001 ; sto#B(#i) #i ; endloop
prt#B
end
+prog template
head test
sto#z 100 ; sto#b 300 ; sto#a 400
end
The naming of the variables plays no role in the order in which they are stored. You can also look this up in the cdb under key “0/100”: the variables are saved in the order in which the sto commands are executed in the code.
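That behaviour is what you would expect from an append-only record list under a single key. This Python toy (an assumption about the layout, not documented CDB internals) mimics the test above:

```python
# Toy model of key 0/100: each sto appends a record, so the order is
# execution order, not alphabetical order of the variable names.

store = []  # append-only list of (name, value) records

def sto(name, value):
    store.append((name, value))

# same order as the test template: sto#z, sto#b, sto#a
sto("z", 100)
sto("b", 300)
sto("a", 400)

order = [name for name, _ in store]  # ["z", "b", "a"], i.e. execution order
```

Under such a layout, every #B record written after a 50,000-element #A sits behind #A’s records, which would also explain why clearing the store first (del#*) evens out the two write times.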