-
Yes, getting a session from ODA can take a while, so I understand wanting to address that for your app. Have you used threads in Python before? You could create a pool of SAS processes to reuse, or just asynchronously start new single-use ones, so they are already started before a request comes in to be processed. At the end of a given request, asynchronously start a new session while ending the one you used and returning from that request; that way the new session is being created before the next request arrives. Pre-populating a pool of these connections to draw from as requests come in could eliminate the delay of starting one (see the sketch below). Are you using threads to process more than one request to your app at a time, or is each request processed and completed before the next can start?
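To make that concrete, here's a minimal sketch of the pooling idea, assuming an ODA configuration named 'oda' in your sascfg_personal.py; the pool size and the helper names (init_pool, get_session, release_session) are illustrative, not part of saspy:

```python
# Hypothetical session pool for saspy; 'oda' is an assumed config name.
import queue
import threading

import saspy

POOL_SIZE = 3            # illustrative; tune to your expected concurrency
_pool = queue.Queue()

def _start_session():
    """Create one SASsession and put it in the pool."""
    _pool.put(saspy.SASsession(cfgname='oda'))

def init_pool():
    """Pre-start POOL_SIZE sessions before any requests arrive."""
    threads = [threading.Thread(target=_start_session) for _ in range(POOL_SIZE)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

def get_session():
    """Take a ready session from the pool (blocks until one is free)."""
    return _pool.get()

def release_session(sas):
    """End the used session and asynchronously start a fresh replacement."""
    sas.endsas()
    threading.Thread(target=_start_session, daemon=True).start()
```

In a Flask route you'd call get_session() at the top and release_session() at the end, so the replacement session is already starting while the response is being returned.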
-
That's great! I'm glad that worked easily.

As far as 'how long can you use a server': you're connecting to a SAS 9 Workspace server, so the only timeout configured in metadata is the 'Inactive Client Timeout'. That isn't a maximum amount of time you can use the server; rather, if you haven't used it in that long, it will shut down. So, as long as you use a server at least once within that limit, over and over, you ought to be able to use it indefinitely. As for how many SASsessions you can have at once, SASPy has no restriction, but I don't know whether ODA limits you to some number of concurrent sessions or not.

In regard to performance improvements, I'm not seeing much for this. It's just a submit of code and a data transfer for the resultant table. Data transfer (sd2df() or df2sd()) will take time based upon the size of the data. It's a multi-step process: gather metadata about the input data, then generate code to move the data from the server to the client (or the other way for df2sd()), then initiate that transfer, creating the output data on the other side. Running SAS code by itself in SAS is the quickest, as it's just running in SAS with no other back and forth.

So, the one thing you can do in that case is make the data you're transferring no more than what you need. I don't know what's in the table you're moving or how big it is, but if there's more in it than you're using on the Python side, transferring only what you need could decrease the transfer time. That can be done on the SAS side when creating what you're pulling over, or you can use dsopts= on the sd2df() call to provide sub-setting, if that's easier; a sketch follows below. Again, I'm not sure what's in the table or whether there's only a subset you need on the Python side, but that could be a place to make a tweak, if so.

Tom
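As an example of that dsopts= sub-setting, here's a hedged sketch; the table, libref, column names, and where clause are all made up, so substitute your own:

```python
# dsopts= takes a dict of data set options; 'keep' and 'where' shown here
# limit the transfer to only the columns and rows the Python side needs.
df = sas.sd2df(
    table='results',                  # hypothetical table name
    libref='work',
    dsopts={
        'keep': ['id', 'score'],      # only transfer the columns you use
        'where': 'score > 0.5',       # and only the rows you need
    },
)
```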
-
I have a Flask app using saspy that I deployed and make API calls to from my website. My previous strategy was to initialize and end the SAS session within each route of the Flask app, roughly as in the sketch below. I recently tried making a global SAS session instead, which cut down the API response time significantly. But I soon learned that this creates issues, either from the SAS session being shared across multiple requests or from a lack of appropriate memory management in the web application.
What are some ways of bypassing the long time it takes to create a SAS session each time an API call is made?
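For reference, a minimal sketch of the per-route pattern I was using, with a hypothetical route and table; creating the session inside the route is what makes each call slow:

```python
from flask import Flask, jsonify
import saspy

app = Flask(__name__)

@app.route('/run')
def run():
    sas = saspy.SASsession(cfgname='oda')   # slow: a new session per request
    try:
        df = sas.sd2df(table='results', libref='work')
        return jsonify(df.to_dict(orient='records'))
    finally:
        sas.endsas()                         # session ends with the request
```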