Mike Jelen

Snowflake Tasks – They’re not just To-do’s


So, you want to run something on a certain schedule, and you’ve looked high and low at other solutions like cron, Lambda, etc.

Did you know that Snowflake has a built-in object that allows you to schedule work against objects on a frequency of your choosing?
That’s right: use it for triggering other processes in Snowflake, checking or updating tables periodically, and more. Tasks can be combined with table streams for continuous ELT workflows to process recently changed table rows. Streams ensure exactly-once semantics for new or changed data in a table.
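The stream-plus-task pattern looks roughly like this. A minimal sketch; the table, stream, and warehouse names (`RAW_ORDERS`, `ORDERS_STREAM`, `ORDERS_CLEAN`, `ELT_WH`) are placeholders, not anything from the talk:

```sql
-- Track changes on a source table with a stream,
-- then consume the stream from a scheduled task.
CREATE OR REPLACE STREAM ORDERS_STREAM ON TABLE RAW_ORDERS;

CREATE OR REPLACE TASK PROCESS_ORDERS_TASK
  WAREHOUSE = ELT_WH
  SCHEDULE  = '5 minute'
AS
  INSERT INTO ORDERS_CLEAN
  SELECT ORDER_ID, AMOUNT, CURRENT_TIMESTAMP()
  FROM ORDERS_STREAM;          -- consuming the stream in DML advances its offset

ALTER TASK PROCESS_ORDERS_TASK RESUME;  -- tasks are created in a suspended state
```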

Tasks can also be used independently to generate periodic reports by inserting or merging rows into a report table, or to perform other periodic work.
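A standalone task for periodic reporting could look like the sketch below. The names (`SALES`, `DAILY_SALES_REPORT`, `REPORT_WH`) and the column layout are illustrative only:

```sql
-- Refresh a daily summary table once a day (1440 minutes) via MERGE.
CREATE OR REPLACE TASK REFRESH_DAILY_SALES
  WAREHOUSE = REPORT_WH
  SCHEDULE  = '1440 minute'
AS
  MERGE INTO DAILY_SALES_REPORT r
  USING (
    SELECT TO_DATE(SOLD_AT) AS SALE_DAY, SUM(AMOUNT) AS TOTAL
    FROM SALES
    GROUP BY 1
  ) s
  ON r.SALE_DAY = s.SALE_DAY
  WHEN MATCHED THEN UPDATE SET r.TOTAL = s.TOTAL
  WHEN NOT MATCHED THEN INSERT (SALE_DAY, TOTAL) VALUES (s.SALE_DAY, s.TOTAL);
```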

Check out our Meetup video below to watch an overview of this event:

Other Meetups and Events to check out:

Transcript from the Meetup event:
Welcome to another Carolina Snowflake meetup session, and today we’ll be talking about Snowflake tasks. And, well, actually it’s a Snowflake task. Just like there is a table object and we call them tables,

there’s a task object in Snowflake and we call them tasks collectively. Right. So we’re going to just kind of walk through that. I’d love to hear everybody’s input and feedback if they’ve used tasks before and that type of thing.

I don’t think it’s a newer feature. I don’t remember when they came out with tasks, actually, to be honest with you. We’ve been using them for a while on lots of different projects, but I don’t remember when the feature was introduced, to be honest.

And lots of people I find who are coming from a pure SQL background aren’t sure how to even use them; maybe they still equate it with ETL and there’s some confusion. So I’m hoping that in this session we can kind of elucidate and clarify some of the misconceptions, and also talk about some of the powerful uses of tasks.

As always, meetup rules: just be kind, participate, and video is optional. And of course, follow DataLakeHouse on LinkedIn and Twitter, and AICG as well on LinkedIn. Love to hear from everybody.

So I think our agenda is old. This is from the last one. I thought we changed it, but we’re really going to be talking about tasks and very similar concepts. Be looking at tasks purpose. We’re going to do a walkthrough.

So just replace everything you see there with tasks from this meetup instead of variants, which was our last meetup. And speaking of our last meetup: yes, if you’re fans of Disney’s Loki and you were part of that meeting last time, we talked a lot about variants and the Time Variance Authority.

The TVA, I think it is, from that show. Really cool show if you haven’t watched it. And for those who asked the question: we do have the recordings of all these meetups available about one to three days after we have them.

And they’re all posted on our YouTube AICG channel. So be sure to check us out over there. There’s a link and we send out the slides, guides and the link to the YouTube video once we have it posted for everybody. 

So no worries there. Check us out as often as possible. All right, just a refresher. Again, keep this super short for everyone coming back in. Snowflake data cloud. Very powerful, almost infinite scale, very fast processing. 

Kind of the world’s answer to doing data warehousing on the Internet right now. Platform as a service. We like it, we recommend it, we think it’s great. It solves a lot of different problems across a lot of different industries, vertically and definitely horizontally across the enterprise.

And we have customers, and we know people, who are ingesting all types of data that you wouldn’t even think about off the top of your head. And they’re running into that funnel of ELT and ETL to get meaningful information out of their data, create insights and that type of thing, for all different types of consumption layers, whether it be machine learning, business analytics, and so forth and so on.

So that is kind of Snowflake in a very, very quick nutshell. And we could talk Snowflake and its internals and its power all day long, but this is a meetup, so we like to just kind of drill down into some specific topics.

And tonight’s topic is about Snowflake tasks. So let’s hit on some of the general purpose and power of Snowflake tasks. And if anybody has any questions, of course drop the questions into the chat.

I’d love to hear if you guys are using tasks currently, or if you’re using any part of the Snowflake data pipeline. I consider the data pipeline really anything at the point of ingestion into Snowflake and any type of transformation or logic, and I consider tasks part of that data pipeline.

Obviously there’s Snowpipe, you’ve got streams, and tasks are kind of part of that framework, if you will. So it has a lot to do with Snowflake clearly, and if you look at the definition of a task, it’s really a piece of work to be done or undertaken.

It’s really, I wouldn’t even say a complement, I would say it’s a counterpart to Snowflake streams, or table streams. I can’t remember really seeing it disconnected any time; where we’ve set up streams, there are tasks right there. And we’ve got an upcoming topic on Snowflake streams.

I think we’ve got some links in here for you guys to check out that topic, but right now we’re going to drill down on tasks. They’re definitely great for executing any type of SQL or stored procedures on any type of frequency.

We actually use them for building out some rough incremental, slowly changing dimensions in certain situations. It’s a good way to prototype that, actually, to just kind of get feeds of data incrementally within Snowflake itself without having to connect outside and pull things in.

So there’s some benefits there; I would almost even call it debugging potential, but we can talk about that later. And one other cool thing is you can actually link tasks, so there can be a dependency tree there on tasks.

And for those familiar with something like Airflow, where there’s a directed acyclic dependency graph, like in some of the newer orchestration tools that are out there, open source or commercial: with tasks you really aren’t creating a true acyclic graph. It’s more just a dependency tree. Right. So you can’t have one task dependent on two other tasks; it has to be a one-to-one reference, and you can use session variables and things like that to help drive the task forward.

So it’s actually really cool. I mean when you really think about it, they’ve added actually built a scheduling system for the most part inside of Snowflake. And we’ll keep talking about that. What else do we hear about tasks inside of Snowflake? 

Well, it seems to be actually fairly frequently used once people get to the advanced concepts of Snowflake. I would say we have a lot of companies get into Snowflake from the very beginnings of the journey. 

And even though we have some of the more advanced techniques in mind a lot of times just get the data in there raw, drive towards the requirements, drive towards business value based on getting that data, analyzing it turn into information. 

And as we’re going through that process, all those advanced features have a purpose and a place. But like I mentioned, Snowflake streams are a great place to use tasks. There’s actually a built-in system function called SYSTEM$STREAM_HAS_DATA, and you can tie that into a task.

Like, when the stream has data at the increment the task is running on, then you can call that stream and read from it, which basically pulls out the incremental changes from that stream table. So there’s some other cool things there.
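That tie-in is a `WHEN` clause on the task. A rough sketch, with hypothetical names (`ETL_WH`, `MY_TABLE_STREAM`, `TARGET_TABLE`, and the `ID`/`VAL` columns):

```sql
-- The WHEN condition skips the scheduled run entirely (no warehouse spin-up)
-- whenever the stream has no new data.
CREATE OR REPLACE TASK CONSUME_STREAM_TASK
  WAREHOUSE = ETL_WH
  SCHEDULE  = '5 minute'
  WHEN SYSTEM$STREAM_HAS_DATA('MY_TABLE_STREAM')
AS
  INSERT INTO TARGET_TABLE (ID, VAL)
  SELECT ID, VAL
  FROM MY_TABLE_STREAM
  WHERE METADATA$ACTION = 'INSERT';   -- stream metadata column marks the change type
```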

You could also technically use it to build near-real-time views, right? Like, if you had a task that’s running every five minutes, ten minutes, 20 minutes, 30 minutes, an hour, two days, whatever, you could actually then create a view that’s aggregating things up.

So you’ve got this view, kind of like a materialized view concept, but through tasks, right? Refreshing an object, same thing, like recreating a view, that type of thing. And then I kind of mentioned this incremental frequency of loading data.

So great for like prototyping, but you can use it in a real production scenario. So that’s kind of cool. And someone actually mentioned this to me, I would almost say years ago at this point, that you could actually kind of mock up this idea of triggers. 

So if you guys remember, if you’re familiar, Snowflake doesn’t have triggers. Most databases do; I was thinking of Oracle when I think of triggers. A trigger is typically something where, when something happens on a table, you write a trigger for what you want to happen based on that event, usually in the table.

So a row is inserted or something like that. And so that concept doesn’t technically exist inside of Snowflake, but you could kind of mock something up, right? So you could create a task that runs every minute and then it can kind of write some sort of query statement that’s probably deterministic. 

And if it pulls back X, then do Y, whatever that is. So it’s kind of like a trigger, but not real time. All right, so I’m curious. I don’t know if the chat is open. Let me see. I see a couple of chats there.
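The mock-trigger idea above could be sketched like this. Everything here is hypothetical (the `APP_LOG` and `ERROR_ALERTS` tables, the `ETL_WH` warehouse, the one-minute lookback), just to show the "if it pulls back X, then do Y" shape:

```sql
-- Poll every minute; insert an alert row only when the "trigger" condition fires.
CREATE OR REPLACE TASK CHECK_NEW_ERRORS
  WAREHOUSE = ETL_WH
  SCHEDULE  = '1 minute'
AS
  INSERT INTO ERROR_ALERTS (MSG, ALERT_TS)
  SELECT 'errors detected: ' || COUNT(*), CURRENT_TIMESTAMP()
  FROM APP_LOG
  WHERE STATUS = 'ERROR'
    AND LOG_TS > DATEADD('minute', -1, CURRENT_TIMESTAMP())
  HAVING COUNT(*) > 0;    -- zero rows inserted when nothing matched
```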

All right, yeah, we’ll absolutely push out the slides, people. No problem. Same with the recording, absolutely. Okay, so I’m going to take a guess, or I’d even wager, that people really aren’t using streams that frequently, or tasks.

I think it really has to get to more of the intricate details of it using the advanced portions of Snowflake. Otherwise I think a lot of people are probably just going to leverage like their ELT tool or their ETL tool to kind of do some of that lifting. 

And that’s okay, right? Nothing wrong with that. Okay, so let me see. Correct, yeah, tasks aren’t real time. So we’re going to jump into that; I’ll answer the question now just for the sake of time. So if you see the second bullet, here’s what we’re going to go over.

We’re going to kind of do like a real world demo right now. We’re actually going to build some task out and watch it run through the full cycle. And then the second bullet point there, we’re going to discuss Cron.

So if anyone is familiar with Linux and Unix (and old-school Linux runs the net right now), there’s something called a cron job, big on Unix and Linux. And that’s basically a scheduling system. And if I recall correctly, the lowest you can get down to in a cron job is the minute.

So you can run the cron job to kick off a script or something every minute of every hour of every day, every week, every month, that type of thing. And from what we’ve seen (again, there’s probably a Snowflake engineer who actually wrote the code for this, ready to reach through their monitor and grab me and say no), the lowest increment is down to a minute.

I could probably read back through the documentation and it might tell me it’s down to 30 seconds, but based on the cron job logic, I think the lowest you can get to is a minute frequency. So we say real time, like we have in the comments.

That takes me back 20 years in analytics and data warehousing: let’s get real time. Does real time for you really mean real time, or is it near real time? And if it means near real time, then I think tasks have us covered.

But for polling and other things like that, that’s where pipes and other things come into play. So as soon as some data lands in, say, your S3 bucket, just as an example, then you’ve got, what is it, SQS triggering Snowpipe to kind of connect that pipe, and then it’s ingesting as soon as something lands inside of S3. So that’s about as real time as you’re going to get, right?

So hopefully that answered the question. Then we’re going to look at how to link a task, and how to have a task call a stored procedure, because inside a stored procedure you can do some pretty cool stuff, right?

So we’ll talk about that. So what are the steps we’re going to take? We’re going to create the task, maybe alter the task. We’ll show the tasks that are there and we’ll view the tasks like running process and that type of thing.

Probably not in that order in those steps, but we’ll do most of those steps. All right, let me just check the chat window one more time and see if there’s any other questions popping up. Great questions, happy to see them.

Thank you guys for shooting those over. Okay, a couple more slides, actually. Key observations: if you think about all these nuanced sorts of objects, how do you really become a super expert? I think the devil’s in the details.

So for tasks, there are just three things our team pulled out. One: security. There’s something interesting where there’s actually a privilege that you need to grant to the role that is going to create and operate that task.

And so the account admin is really the only role that can grant the privilege to another role. And then whoever has that role can in essence create a task, right? And that privilege is called EXECUTE TASK.
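In SQL, that grant looks something like this. The role name matches the one used later in the demo; the warehouse and schema names are assumptions:

```sql
-- ACCOUNTADMIN holds the global EXECUTE TASK privilege and hands it out.
USE ROLE ACCOUNTADMIN;
GRANT EXECUTE TASK ON ACCOUNT TO ROLE SNOW_TASK_ADMIN;

-- The role still needs the usual object privileges to create and run tasks:
GRANT USAGE ON WAREHOUSE ETL_WH TO ROLE SNOW_TASK_ADMIN;
GRANT CREATE TASK ON SCHEMA MY_DB.MY_SCHEMA TO ROLE SNOW_TASK_ADMIN;
```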

Then the next one is dependency. I mentioned the acyclic graph. Again, there’s only one connection for dependency, so one task can be dependent on zero or one task, and we’ll talk about how to do that.

And then the last piece, which I always find interesting and often forget, is that when the task is actually executing, it’s executing as a system service. So even though you might get the access to create the task, when the task actually runs, based on whatever frequency you set up, it’s actually kind of running, let’s just call it, in the background.

And you’re still the owner if you created it, and you have the ownership right, but what’s actually running it is a system service. So in essence, if somebody were to delete your profile, for example, in theory your task would still run; it wouldn’t break your task running on its frequency, that type of thing.

So it actually runs as a system service. Cron scheduling, yes. So this is what I was talking about; I actually snagged this one from Google Cloud, but cron is basically a Linux and Unix function. So whether Snowflake is using it, or Google, or Microsoft, it doesn’t matter; it’s the same structure. They’re not going to change cron.

So you can kind of see from the schedule fields that the lowest you can get down to is the minute. And if they’re using the cron process for scheduling jobs, then the lowest you’re going to get down to is, in essence, one minute as the smallest frequency.
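Snowflake exposes that cron syntax through `SCHEDULE = 'USING CRON ...'`, with the standard five fields (minute, hour, day of month, month, day of week) followed by a time zone. A sketch with assumed names (`NIGHTLY_TASK`, `ETL_WH`, `MY_NIGHTLY_PROC`):

```sql
-- 2:00 AM Eastern, Monday through Friday; note the minute field is the
-- smallest granularity the cron expression offers.
CREATE OR REPLACE TASK NIGHTLY_TASK
  WAREHOUSE = ETL_WH
  SCHEDULE  = 'USING CRON 0 2 * * MON-FRI America/New_York'
AS
  CALL MY_NIGHTLY_PROC();
```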

I could be wrong, and I’m hoping somebody chimes in in the comments, or gives us feedback on the meetup or a blog or something, to tell me that I’m wrong. And if we find that we’re wrong, if it’s actually 10 seconds or 30 seconds, we’ll post it.

So what we’re going to build in this quick hands-on demo is just this flow. I said a real-time run-through, as in real time in this session with the demo.

So we’re going to have one main task. And I like that even GitHub and everybody, they’re not calling it the master task anymore; they call it the main task. So this main task is going to have two subtasks.

So task A will be dependent on the main task and task B will be dependent on the main task. And then there’ll be one sub-subtask, which will be a subtask to task B, and the sub-subtask is going to call the stored procedure.

So hopefully that diagram makes sense and that’s what we’re going to do right now. And so let me jump into it. A couple of things that have to get set up. So I’m going to jump over here. Can you guys see the Snowflake worksheet environment? 

Yes. Okay, cool. Let me see if I can blow this up a little bit just for the sake of conversation. Okay. So I’m a step in this and hopefully it’s not too boring or too pedantic. Let me just kind of minimize. 

We know it’s going to work. So you can kind of see my pre-test here. This whole first set is all about creating and assigning the privilege to a role. And as you guys know, security in Snowflake is really all role-based.

So what we’re going to do is use SECURITYADMIN, and we’re going to create this role, which I already have created. So we’ll see if it bombs on us. It already exists for me: SNOW_TASK_ADMIN.

And one thing that we’re going to throw out in our security best practices meetup in a few weeks, or a webinar: what we have as a best practice is, when we create roles, we use underscores to break up the role name, because all of the roles from Snowflake are a single word.

So when you’re looking at it, it’s very easy to discern which one is a custom role and which one is a Snowflake role. And by the way, for anybody taking their SnowPro Core test, it’s a natural question: what are all the system roles from Snowflake out of the box?

Just FYI if you’re taking that test. Okay. And then we’re going to go ahead and use that role, because we want the ACCOUNTADMIN role to go here and grant the EXECUTE TASK privilege on account to this new role that I created.

We’ll do that, and then of course, as you guys know, we use the security role to then actually do the grants to our users. So we’ve got two users here that we already have, and we’ll just assign those, and everything should succeed.

So that’s good. Alright. And then of course, just using standard Snowflake syntax, everybody should be doing this when you’re writing scripts, of course. So here we’re just going to make sure we’re using the correct database and the correct warehouse just in case anything changed as we’re navigating from worksheet to worksheet. 

I’m going to go ahead and use my key role, the data lake house role, and then I’m going to go ahead and create the schema that we’re going to work in. So just kind of go and create that. Okay, now we’re getting down to brass tacks. 

So we’ve got all our housekeeping out of the way: we’ve got our EXECUTE TASK privilege created and assigned to our role, and we’ve got our schema. So now we’re going to create this kind of just really dumb demo table.

It has three columns in it, all very generic, right? There’s no great business purpose behind this table: row option, row status, and then row timestamp. We usually use TS as a suffix for timestamp column naming.

So go and create that table. Now this one’s interesting. What we’re doing is creating a stored procedure using standard Snowflake syntax. And what the stored procedure is actually going to do is this syntax here: it’s going to insert into that table an incremented value based on the max record, with a filter, I think.

So it’s basically just going to do an increment. Because what we want to do in this logic, and I don’t know if I was clear on it, is: we’re going to run this main task and it’s going to basically insert a value into that demo table.

Then it’s going to run these two tasks at basically the same time, and those two are going to insert a new record. And then this one here is going to look at the value from this record and increment it by one by calling the stored procedure.

So very basic, very generic test and logic to follow. So if I run this right now, we’ll see that it actually inserts a record; that’s what we’re going to do from this procedure. And if we do a quick select all from this table, then we should see this value, just one, because that’s what our logic has.

So it’s just going to insert a one. We’ve got the status, okay, and it gives us our timestamp, so we’ll know when rows are being inserted or updated and that type of thing. So let’s go and create our procedure.

Remember, in Snowflake the worksheet statements end at the semicolon. So all I have to do is click anywhere in the statement prior to the semicolon and then do Cmd+Enter, and that runs the command in front of the semicolon.

So that created the procedure you see there. And I’m just going to test it one more time, make sure it works. Live demo, and we’ll see. Okay, so that actually worked, and it returned true because I’m actually returning a boolean value.

And let’s just go back up real quick and run this and see what we have. Okay, so you can see here that this is what happened when I ran the logic the first time. And then here it actually took this value and did a plus-one on it.

So it just incremented it by one. All right, so now let’s go and create those tasks. Hopefully you can kind of still see the logic here. So now I’m just going to create this task. So what are we doing?

CREATE OR REPLACE TASK. This is the task name; again, I use underscores in all of our logic as the best practice. We’re going to use this warehouse, right? Thinking about the warehouse is important because, remember, when it runs, it’s running behind the scenes.

So you don’t get to set the warehouse later on unless you run an ALTER statement. But if you don’t set the warehouse, then it’s probably just going to pick the default warehouse. Or actually, I’ve never run it without the warehouse, so we should actually try that and see if it breaks.

But anyway, you want to define a warehouse so you know under what compute it’s going to run. And then you can see here we’re running with a two-minute schedule. I could run it with a cron schedule, even based on time zone.

I just have that commented out. So if anybody wants to really see that run or get into that discussion, we can definitely do that as well. So I’m just running it every two minutes. And then you can see here.

So: create the task with this warehouse, scheduled for every two minutes. And then after the AS is where you put your logic. And what we’re doing here is just inserting a value, inserting a one.

We’ve got this status so we can kind of know it’s coming from the main task. And then we’re doing the timestamp. So when I run this again, it’s just creating the task, right? It’s not going to run it.

So when I execute this line, it’s just going to create the task. And then here’s our subtask A. What I’m doing here is creating it with the status of task A, okay, with a value of 1000. And so I’ll run that.

Now, notice this one clause here, because it’s dependent on the main task. So this is saying AFTER. It doesn’t say RUN AFTER or EXECUTE AFTER; I like the simplicity of Snowflake. It just says CREATE OR REPLACE TASK, then my long task name, the warehouse, and AFTER the demo main task, which is the task we created. So it will know to run, or execute, after the main task completes, which is going to happen every two minutes.

Same thing with subtask B. If you revert back to our diagram, B is going to run after the main task. So let’s run that. Run this one, and it’s going to put in a higher incremental value, just so we can separate them out, and subtask B is okay. And then the last one we’re going to run is this guy. I like testing; I think I have a QA background. I love testing the length of object naming that you can do.

So this is a pretty long one, and you can see it’s going to run after task B. So here’s our dependency on task B, which in turn depends on the main task. How many times can I say task during this session? And so this one follows our graph dependency chart.

And you can see here, now the logic is a call to the stored procedure with an incremental value of one. Okay, so let’s go ahead and run this, or create this, rather. All those have been created, so in essence it should be running right now.
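Pulling the walkthrough together, the four CREATE statements form a tree roughly like this. The object names are approximations reconstructed from the talk, not the exact script (which is on the GitHub repo):

```sql
-- Root task: the only one with a SCHEDULE.
CREATE OR REPLACE TASK TASK_DEMO_MAINTASK
  WAREHOUSE = ETL_WH
  SCHEDULE  = '2 minute'
AS
  INSERT INTO TASK_DEMO_TABLE VALUES (1, 'MAIN', CURRENT_TIMESTAMP());

-- Dependent tasks get AFTER instead of SCHEDULE.
CREATE OR REPLACE TASK TASK_DEMO_SUBTASK_A
  WAREHOUSE = ETL_WH
  AFTER TASK_DEMO_MAINTASK
AS
  INSERT INTO TASK_DEMO_TABLE VALUES (1000, 'TASK_A', CURRENT_TIMESTAMP());

CREATE OR REPLACE TASK TASK_DEMO_SUBTASK_B
  WAREHOUSE = ETL_WH
  AFTER TASK_DEMO_MAINTASK
AS
  INSERT INTO TASK_DEMO_TABLE VALUES (2000, 'TASK_B', CURRENT_TIMESTAMP());

-- The leaf calls the stored procedure that increments the max value.
CREATE OR REPLACE TASK TASK_DEMO_SUBSUBTASK
  WAREHOUSE = ETL_WH
  AFTER TASK_DEMO_SUBTASK_B
AS
  CALL TASK_DEMO_INCREMENT_PROC(1);   -- hypothetical procedure name
```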

The clock should be ticking. So what I can do is go here to SHOW TASKS, and I should see all four of my tasks. I can see it’s scheduled, I can see it’s suspended. And it stays suspended because it’s just not running right now.

Unless we have a process that’s taken a long time to run and Snowflake, that’s probably pretty hard to do unless it’s an intense task. I think it’s always for the most part, by the time you query this show task, your state of the task will most likely be suspended, if that makes sense. 

Okay. And there’s some other info over here that we can take a look at. All right, so I’m going to start looking at the table. So that’s kind of the key part: that’s how you create a task, a table, a stored procedure, and a set of tasks that have dependencies.

We’re going to show how we can actually see dependencies here in a moment as well. So let me just enlarge this one more time, make it bigger for everybody. So I’m going to run this, and now we can see that we’ve got the four rows.

No, sorry, wrong query. Let me run this. Okay, so we still just have the two. All right, fingers crossed this is actually going to work. And here I’m going to do a test: I’m going to run where it’s equal to main.

Okay, so when we know the main task fires, then we’ll be able to see it here, right? But right now we have zero. We can suspend tasks, so once we get this going we can say, hey, either the main task is completely suspended, or we can suspend a subtask so that it doesn’t run.

And any time you can turn one of those off, that’s really great if you think about it. Like, let’s pretend you’re building a database or something like that, and you’ve got some dimensions and some fact tables, and you don’t want to kill your entire flow for some reason. Say you’re actually building your data warehouse using tasks, which I might advise against,

but you can just suspend one part of that tree if you want to, if it is all built on tasks, for example. So that’s kind of cool. So I won’t suspend that. Let’s see if anything kicked off. No, not yet.

So here what we can do is we can do a resume statement. I’m just going to kind of walk through this and we’ll kind of open it up for discussion and see if things are working correctly. So I’m going to resume this and I’m actually going to go to the main task.

Let me just create one here: ALTER TASK, main task, RESUME. So now, let’s check one more time if we have anything. Still just the two rows. I think it’s been about two minutes, so let’s confirm some things.
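Since tasks are created suspended, each one in the tree has to be resumed before the flow runs end to end. A sketch using the approximate demo names from above; `SYSTEM$TASK_DEPENDENTS_ENABLE` is a real system function, though whether it existed at the time of this talk is uncertain:

```sql
-- Resume the children before the root so no run is missed:
ALTER TASK TASK_DEMO_SUBSUBTASK RESUME;
ALTER TASK TASK_DEMO_SUBTASK_B  RESUME;
ALTER TASK TASK_DEMO_SUBTASK_A  RESUME;
ALTER TASK TASK_DEMO_MAINTASK   RESUME;

-- Or resume the root task and all of its dependents in one call:
SELECT SYSTEM$TASK_DEPENDENTS_ENABLE('TASK_DEMO_MAINTASK');
```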

Okay, the next thing we’re going to do is get a little bit more advanced: we can actually look at the information schema. The information schema is super important; pretty much every good database has some type of information schema where you can look at the metadata, and Snowflake actually has a table function for task dependencies, TASK_DEPENDENTS.

And notice this here: this RECURSIVE value is basically false, and we’re looking at the main task as kind of the top one. What are its dependents? So if we go here and run this query, we can see that we have two dependents.

So we have our first task here, where we can obviously see the schedule, because it’s the only one that’s scheduled. And as far as a predecessor, there’s nothing above it. That’s our top task, right?

So this is your standard like parent child hierarchy and we can see that illustrated. There’s nothing above main tasks, so we know that’s the top task. And then here we can see that predecessor has a value for the other tasks.

So get the task name: there’s your A and there’s your B, and of course that’s correct as well, A and B. But what we don’t see is the sub-subtask, and that’s because this is just showing the initial dependency, because we have RECURSIVE equals false.

If I make this true and run it, then instead of three records, I should get four. And see, we caught it in the started state. Now look at that, I caught it. There’s a question in the chat. Yes, there is a log. I’m going to take a screenshot of this real quick, because it’s like winning the lottery that I actually caught that started state.
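The dependency query being described is roughly the following; the fully qualified task name is a placeholder:

```sql
-- Walk the task tree; RECURSIVE => FALSE returns only direct dependents,
-- RECURSIVE => TRUE returns the whole tree (main, A, B, and the sub-subtask).
SELECT *
FROM TABLE(INFORMATION_SCHEMA.TASK_DEPENDENTS(
       TASK_NAME => 'MY_DB.MY_SCHEMA.TASK_DEMO_MAINTASK',
       RECURSIVE => TRUE));
```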

Yes, so there is a task history, which pretty much sums it up for me, I think. Okay, if you guys got the dependency piece, that’s helpful, right? Now we’re trying to see if there’s anything in the history that we can discern as a problem or anything like that.

I’ve found this takes a little bit longer to run; I think it’s because it’s going through logs. I really haven’t used it a tremendous amount, because a lot of times when you’re looking at the history you’re tracking issues or dependencies.

Because in theory, if all of your tasks are on a graph, you could just pull out all the business logic and run it step by step by step in the corresponding session, with variables or anything else. But this would be it; this would be your answer. So great question on the logging. If you want to find out whether your tasks succeeded or failed, or when they failed, or perhaps even why they failed, you can go into the information schema task history.

And the cool thing about this, because it’s a lot of metadata, is you can actually start looking at things like a time range or schedule range, and just like any other query, do that analysis by filtering and predicating the data with things that are related to it.

So this is pretty cool. This will be the output of something. But notice, this is a super small example, nothing crazy, and that’s 20 seconds to get at that information schema. So that might give you some indication of the power of tasks.

But if you use them very widely for ETL, ELT, that type of thing, or big transformations, right, 20 seconds is a long time. I mean, this is an extra-small warehouse; let me just clarify that piece, right?

Most customers we work with run smaller warehouses, probably at the minimum for any production-type compute, so I’m sure this will come back faster in most situations. Okay, let’s see if there’s any other questions here.

So let’s do one last check and see what we have in our table now. More than two records? Yes, awesome. Okay, so we can see that the main has actually kicked off a couple of times. But, you know, the problem here, I think, is I don’t know if I actually resumed everything or started everything to begin with.

So I don’t have my core task resumed or started. Actually, let me do that real quick and then I’ll get to talking on some other wrap up points and then we’ll come back and check to make sure everything was actually kind of working.

So let me copy this real quick. I thought I had that in my scripts above, but I guess I did not. So run that, and then we’ll get this one here. We’ll just make sure those are enabled and resumed, and then we’ll start getting to some fine points.

So we’re going to resume all of these and update the graph, with nothing suspended. Okay, that’s what we wanted. So they all should be running right now, or at least able to run when the two-minute mark comes.

So we’ll check back in a few minutes. Okay, so we have 20, 315, 35. Okay, so we kicked it off like two minutes ago. So let’s go back to the PowerPoint here. All right, let me see, a couple more questions.

Will dependent tasks kick off even if the predecessor fails? No, because there’s an AFTER statement, so it shouldn’t run; it would only run after that one succeeds. If not: any way we can ask it to execute upon attempt of the previous, irrespective of whether it was successful or not?

That is a great question. That is a really, really good question. I think we might have to actually set that up and play with failing on purpose, and then see if it continues down the pipeline.

And I think if that is the case, there could be some workarounds, but I don’t know if there’s an on-failure option. You might be throwing me a softball on that one, but I don’t actually recall if there’s an on-failure-continue type of logic.

All right, yeah, I will look that up after and try to provide some comments on that. That one’s good, because you would think you would have, like, on-failure and on-success, that type of thing. So I will confirm.

I just don’t recall using it. So let me see. Yes, absolutely. So let’s just recap what we did. So we walked through creating the roles first, created a temp table, and then created our task, and then we had all these other steps after we created our task to go ahead and make sure that our task was running.

We could look at the task history and logs, and then we could obviously validate some things, enable and suspend the task, and that type of thing. And really, that’s for the most part a productive discussion and walkthrough of probably 95% of the use cases for tasks that are out there.
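The task-history check mentioned above can be done with the TASK_HISTORY table function in INFORMATION_SCHEMA (the task name here is a placeholder):

```sql
-- Most recent executions of a specific task, including state and any errors.
SELECT name, state, scheduled_time, completed_time, error_message
FROM TABLE(information_schema.task_history(
  task_name => 'MY_TASK',
  result_limit => 10))
ORDER BY scheduled_time DESC;
```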

Right. Well, I take it back. Probably 80%, because we didn’t go into table streams. We’ll cover that in a few months, and then we’ll bring tasks back into the conversation. So what we went through today covers probably the bulk of using tasks.

So again, this review and wrap up anything else here to think about? I think we covered a lot. About tasks. That was good and we have some great questions. So what we’re going to do is we’re going to post our code out on GitHub like we usually do.

This is the link to it. Be sure to go out there and give us a star. Invite your friends to kind of check it out as well. And then if there’s any issues or any questions like you have, you can go and post them into the issues.

We’ll be checking those and responding to those issues and clearing them out. So if we do have like typos or something that’s not working or something that good questions you want to see, then we can go ahead and we can respond through our GitHub on that.

One more question comes in before we wrap it up. Last one. Any way to start a task out of schedule, something along the lines of an ALTER TASK start? I’m not sure I fully understand that question.

Anyway, starting a task out of schedule. Yes, you can run the ALTER statement and then reschedule it. As a matter of fact, what you could do if you wanted to, because everything in Snowflake is mainly SQL, is create a task that has a subtask calling a stored procedure.

And that stored procedure could in theory look up a table that has a different incremental time, and then you could alter the parent task that’s on the schedule with the frequency that’s in your table, the one that was read by the procedure, something like that.
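A couple of notes on that question: Snowflake also has an EXECUTE TASK command that runs a task immediately, outside its schedule, and the dynamic-scheduling idea above ultimately comes down to an ALTER TASK statement. A rough sketch, with hypothetical names:

```sql
-- Run a task once, right now, outside its schedule:
EXECUTE TASK my_db.my_schema.parent_task;

-- Dynamic rescheduling: a task must be suspended before its
-- schedule can be changed, then resumed afterward.
ALTER TASK my_db.my_schema.parent_task SUSPEND;
ALTER TASK my_db.my_schema.parent_task SET SCHEDULE = '5 MINUTE';
ALTER TASK my_db.my_schema.parent_task RESUME;
```

In the scenario from the discussion, a stored procedure would read the desired frequency from a table and build these ALTER statements dynamically.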

Did that answer your question? Okay, awesome. I should have just let you ask it in your own words, but no, the chat works as well. Very cool. Yeah, very flexible. They built a really fantastic system where everything is in so many ways connected and tied together.

But that’s a really good question. That would make a great blog post or webinar on that one question alone, like the interconnection of tasks, dynamic task scheduling. Right? That is cool.

That’d be a good one. I’ll tell the team about that one. That’s a very technical article, though, but one probably worth doing. Well, as always, we have our discussion and Q&A session, but I know people are dropping off.

If you guys have any questions, we can stay on for a little bit and just talk through chat. As always, our DataLakeHouse project is in full swing. We’re actually releasing DataLakeHouse GA in October. 

We’re still looking for companies to beta test DataLakeHouse, but you can actually go to DataLakeHouse.io. We should probably have the link in here at this point, but you can go to DataLakeHouse.io and check it out. 

And we’ve got a free trial coming up, but again, we’re looking for beta testers: anybody who’s loading their data into Snowflake and wanting to do some really amazing things, that is the platform for. And then we’ve got some upcoming events, as usual.

So we’re scheduled out. Heather, how far out are we scheduled for the next three, four months? Yeah, we’ve got two more on the schedule for now, and I’m probably going to go in tomorrow and add two or three more after that. 

So next we’ve got Snowflake Streams, and then we’ve got Security Best Practices after that in October.

Yes, that sounds great. And we’ve had a lot of demand for bringing back more Python with Snowflake. That seems to be the most requested sort of integration and conversation topic. If there’s anyone who has any good business use cases they’re looking for us to tackle integrating Python with Snowflake, let us know.

We’ll be happy to put something together related to that topic based on your suggestions. Well, if there’s no other questions or topics or anything, we can go ahead and wrap up and thank everybody for joining us in this August Snowflake Carolina Meetup group. 

And thank you guys for coming, and bring friends and colleagues to the next one. We look forward to seeing you there. Thank you so much!
