
Post changes from VirtualTable to SQL Server

Posted: Fri 26 Jun 2009 10:32
by johncmolyneux
Hi.

I want to perform quite a straightforward task, but I can't find any help on how to do it.

I'm using an MSQuery to get data from a table, and assigning the data to a VirtualTable (it needs to do this to mimic the complete functionality later). I'm then editing the VirtualTable via DB-aware controls, and I want to post the changes back to the MSQuery (and therefore the DB).

How do I do this final part?

Thanks in advance,

John.

Posted: Fri 26 Jun 2009 11:13
by Ludek
I would also appreciate a simple solution to this issue. I would use it to solve my old problem described in
http://devart.com/forums/viewtopic.php?t=13289

Posted: Fri 26 Jun 2009 11:24
by johncmolyneux
After reading the linked post from Ludek, I decided to give a bit more background on my current problem, so as to explain my requirements.

I have a server/client application setup, where the server has DB access, and the client does not.

The client can request data (with an SQL query) and the server creates a VirtualTable and sends it via TCP to the client.

The client then displays the data and allows the user to modify it.

If the data has been modified, it is sent back to the server via TCP and recreated as a VirtualTable there.

The final step is to take the VirtualTable and apply the updates to the original MSQuery so that it can post the changes to the database.

I was originally doing this using ADO and a KbmMemTable. I found SDAC a lot faster than ADO, but the VirtualTable only slightly faster than the KbmMemTable. At least with the KbmMemTable I could apply the dataset back to a dataset object and then post the changes. I cannot find any way to do this with the VirtualTable, hence asking this question.
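
For illustration, the server side of that flow might look something like this (a simplified sketch only: the component names, the query text and the use of TVirtualTable's Assign and SaveToStream methods are assumptions, not the actual production code):

procedure SendDataToClient(AStream: TStream);
begin
  // Run the client's SQL and copy the result into a VirtualTable.
  MSQuery1.SQL.Text := 'SELECT * FROM SomeTable';  // hypothetical query text
  MSQuery1.Open;
  VirtualTable1.Assign(MSQuery1);                  // copies structure and data

  // Serialise the VirtualTable so it can be sent to the client over TCP.
  VirtualTable1.SaveToStream(AStream);
end;

On the way back, the server would rebuild the table from the received stream (presumably via LoadFromStream), which leads straight to the question in this thread: how to get those rows into the MSQuery and posted to the database.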

Hope this makes sense, but please ask if you need clarification. I look forward to your reply.

Thanks,

John.

Posted: Fri 26 Jun 2009 11:43
by johncmolyneux
Okay, so this turned out to be so easy that I feel a little silly now (not really :twisted: )

I did the following...


MSQuery1.Assign(VirtualTable1);
MSQuery1.Edit;
MSQuery1.Post;


MSQuery1 was the original dataset that I retrieved from the database, but it could be a reconstructed version, made by simply calling the same query again.
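
If Assign does not do the trick in a given setup, a more explicit (and entirely illustrative; the key column name ID and the loop variable are my own assumptions) approach is to walk the VirtualTable and apply each row to the query by hand:

// i: Integer declared in the surrounding routine.
VirtualTable1.First;
while not VirtualTable1.Eof do
begin
  // Locate the matching row by a key column (assumed here to be called ID);
  // edit it if found, append a new row otherwise.
  if MSQuery1.Locate('ID', VirtualTable1.FieldByName('ID').Value, []) then
    MSQuery1.Edit
  else
    MSQuery1.Append;

  // Copy every writable field value across.
  for i := 0 to VirtualTable1.FieldCount - 1 do
    if not MSQuery1.FieldByName(VirtualTable1.Fields[i].FieldName).ReadOnly then
      MSQuery1.FieldByName(VirtualTable1.Fields[i].FieldName).Value :=
        VirtualTable1.Fields[i].Value;

  MSQuery1.Post;   // posts this row's change back to SQL Server
  VirtualTable1.Next;
end;

Note that this only covers inserts and updates; rows deleted on the client would need separate handling.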

Ludek - hope this helps you out.

John.

Posted: Fri 26 Jun 2009 12:26
by Ludek
Hmmmm, now I see: our problems are different. I would just like a dataset class that does not necessarily read its data from the database, but can write the user's changes back to the database.

Even a CachedUpdates property and an OnUpdateRecord event in the TVirtualTable class would help me a lot (plus the ability to fill the TVirtualTable from a TMSQuery, but as I see now, that is probably already possible).

Posted: Fri 26 Jun 2009 13:20
by johncmolyneux
I think what you're asking for is a definite no-go. If the table you wanted to eventually post to had ID fields (for example), how would they be handled in a generic manner?

It's quite a strange thing you're asking for.

The only suggestion I would have, other than "where 1 = 0" (which obviously still parses every row of data), would be to do a "select top 0 * from table". At least that gets you an empty dataset which can be posted to later, without the overhead of fetching a load of data that just gets ignored.
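
To sketch that suggestion (the table name, the field name and the CachedUpdates/ApplyUpdates usage below are illustrative assumptions, not tested code):

// Open an empty dataset that still carries the table's structure.
MSQuery1.SQL.Text := 'SELECT TOP 0 * FROM Invoices';  // hypothetical table
MSQuery1.CachedUpdates := True;                        // hold changes locally
MSQuery1.Open;                                         // returns field defs, no rows

// Rows appended later (from code or DB-aware controls)...
MSQuery1.Append;
MSQuery1.FieldByName('CustomerName').AsString := 'Test customer';  // hypothetical field
MSQuery1.Post;

// ...go to SQL Server in one batch when you decide to save.
MSQuery1.ApplyUpdates;
MSQuery1.CommitUpdates;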

Posted: Fri 26 Jun 2009 14:17
by Ludek
Why strange? Imagine one master, 20 details, a slow connection over VPN (one roundtrip ca. 100 ms), and a call to MasterDataset.Append. That means opening a query 20 times = at least 2 seconds just to deliver no data. (Local master-detail is not possible, as the details table has millions of rows.)

I don't use MS SQL Server generated identity columns or any other specialities, so no problem can appear.

Posted: Mon 29 Jun 2009 07:06
by johncmolyneux
I must be misunderstanding your problem, because I'm not seeing a reason why you would open 20 queries to the master table, in order to append 1 master row and 20 detail rows.

Could you not create an insert statement that handles all the inserts in one go?
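
Something along these lines is what I mean (the table and column names, and sending both statements as one batch, are just illustrative assumptions):

// One round trip to the server instead of opening each detail query first.
MSQuery1.SQL.Text :=
  'INSERT INTO InvoiceMaster (InvoiceNo, CustomerId) VALUES (:MNo, :Cust); ' +
  'INSERT INTO InvoiceDetail (InvoiceNo, ProductId, Qty) VALUES (:DNo, :Prod, :Qty)';
MSQuery1.ParamByName('MNo').AsInteger  := 1001;
MSQuery1.ParamByName('Cust').AsInteger := 42;
MSQuery1.ParamByName('DNo').AsInteger  := 1001;
MSQuery1.ParamByName('Prod').AsInteger := 7;
MSQuery1.ParamByName('Qty').AsInteger  := 3;
MSQuery1.Execute;   // the whole batch is sent to SQL Server in a single call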

Posted: Mon 29 Jun 2009 12:37
by Ludek
In 95% of cases the user calls Master.Append using a button and then fills the 20 detail datasets (i.e. grids) in cached updates mode. Then he saves everything in one transaction.
In the other 5% he calls Master.Edit and then edits data everywhere (in the master, in the clients, everywhere).
I want to save the 20 query openings in the 95% case and still keep the 5% editing case possible. (Imagine writing invoices: you nearly always create new invoices (= Master.Append), with positions (= Client.Append), serials (...), payment options etc., and just very occasionally want to edit older ones (= Master.Edit, Client.Edit, Client.Append, etc.).)

I don't need to send Query.Open to the client datasets in the Master.Append case, as I have persistent fields and know that I won't get any rows.
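
The "save everything in one transaction" step would then look roughly like this (a sketch only; the connection and query names, and the ApplyUpdates/CommitUpdates pattern, are assumptions, not the actual code):

// All detail queries run with CachedUpdates = True, so nothing reaches the
// server until the user presses Save.
MSConnection1.StartTransaction;
try
  MasterQuery.ApplyUpdates;     // send the cached master changes (or Post, if not cached)
  DetailQuery1.ApplyUpdates;    // ...and each detail dataset's changes
  DetailQuery2.ApplyUpdates;
  // ... the remaining detail datasets ...
  MSConnection1.Commit;

  // Clear the update caches only after a successful commit.
  MasterQuery.CommitUpdates;
  DetailQuery1.CommitUpdates;
  DetailQuery2.CommitUpdates;
except
  MSConnection1.Rollback;
  raise;
end;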

Posted: Mon 29 Jun 2009 12:42
by Ludek
I mean 20 detail datasets, not just 20 rows. Each dataset has different fields from different tables, shown in different grids (well, 20 is slightly overestimated, but 10 would be a realistic number).
And the 10 queries are not simple: there are sometimes many (10+) joins, some joins even on (unindexable) views, so even with "top 0" and on a LAN their execution is sometimes slow.

Posted: Mon 29 Jun 2009 13:40
by johncmolyneux
I think that realistically, with a design like that (and I use the word design generously!), you're gonna have to bite the bullet and either redesign it in a more friendly manner, or create the empty datasets client-side by adding the field defs either at design-time or in code at run-time.
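
A rough sketch of the run-time variant (the field names are invented, and it assumes the TVirtualTable AddField method for defining the structure in code):

// Build an empty dataset on the client without touching the server at all
// (assumes VirtualTable1 has no fields defined yet).
VirtualTable1.Close;
VirtualTable1.AddField('InvoiceNo', ftInteger);        // invented columns -
VirtualTable1.AddField('CustomerName', ftString, 50);  // match your real schema
VirtualTable1.AddField('Amount', ftFloat);
VirtualTable1.Open;   // opens with zero rows, ready for Append from the grids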

I don't think these components are lacking anything that would help you with this, and I don't think anything other than a redesign will!

Posted: Mon 29 Jun 2009 15:11
by Ludek
Well, if you tell ALL our customers that next time they can't do all the things they were used to, I would be really happy, believe me :)

Sorry, but I CAN see a possible solution here on the SDAC side. And I know I am able to redesign everything, without TMSQuery at all, just everything hand-rolled; I just don't have time for it... :( And we are in 2009, not in 1990 :)

I can also tell you that copying data between TVirtualTable and TMSQuery is silly and that you have to redesign and avoid it :)

Please lock this thread, it is all pointless...