A Go developer's guide to building on AWS

And uh yeah, welcome to re:Invent. Thank you for coming to the session. I am Fuzz, also known as Mohammed Fazalullah Qudrath — Fuzz is super easy for folks to remember. I am a senior developer advocate at Amazon Web Services, and I'm here all the way from dusty and hot Dubai to awesome and cool Vegas.

Today's session is a developer's guide to getting started with Go on AWS. A lot of this is based on conversations we've had with customers, especially folks who are building Go applications and deploying them on different kinds of compute platforms, but who sometimes just need some direction on how to get started with AWS and what's available. This is a level 200 talk, so expect a lot of resources in general. We're going to touch on a lot of topics, so it's going to be broad; we have a lot of references that we'll be pointing to and sharing later. Also, a special mention to my co-speaker, who couldn't make it today.

Abhishek Gupta — he and I built this talk together, and he put in a lot of effort. So a lot of what you're going to see is stuff that he built as well. We'll be sharing our contact details later, so you can reach out to us, and if you have more questions we can take it up there. Cool.

So to get started, just a quick show of hands: how many of you are actually Go developers, or enthusiasts like me? Ok. So I'll assume the rest of you are Java developers then. I mean nothing against Java — I actually started as a Java developer, and I still write Java a lot of the time. But I am an enthusiast, at least in the Go world, because of the kind of things we see being built with Go these days. We're going to talk a bit about that in the next couple of slides.

Now, these are common questions that come up in conversations I've had — when I used to be a solutions architect, and now as a developer advocate, with a lot of the developers and builders I meet. A lot of it is about the kind of support AWS provides for Go applications, and then things like: where do I deploy my Go application? Yes, I know about virtual machines; yes, I know about containers and Kubernetes. But is there something more that AWS can provide? We're going to be talking about one of those services. Then there's serverless development — that's always one of the options folks want to choose, because they just want to focus on the code versus trying to manage servers and infrastructure.

And of course — no surprise — generative AI is the norm these days. A lot of folks are coming in and asking how to actually build generative AI solutions, and we're going to have a demo about that; we'll also go through a bit of the code. So if you're interested in how to use generative AI on AWS, stick around for that one. We'll also touch a bit on databases — what different databases are available — and, from a tooling perspective, what you actually have at your disposal when you want to build and then deploy on AWS.

The references are all on a GitHub repo. It's pretty straightforward — I'll be sharing it again at the end of the talk, but keep it as a reference, because there are a lot of links and pointers to other resources. This is a lot to cover in one session.

Cool. So, a quick primer on Go for the folks who are not Go developers yet: it's a statically typed, compiled language that was introduced by Google in 2009. Over the years it has come to be heavily used in APIs and in building infrastructure; it's used by DevOps and SRE teams — if anyone's used Kubernetes, Kubernetes is built in Go — and we also see it used in data processing in a lot of places, for example ETL scripts and pipelines. Go has been a great fit for cloud-native applications. Because it's statically typed and compiled, you get great performance; you also have a solid standard library; and because it compiles to a single binary, you can take it and deploy it anywhere directly. You get all the dependencies bundled up together with the runtime, and you can just run that wherever you need to. So it's easily shippable, and the language has been stable — it also focuses a lot on preserving backward compatibility as much as possible.

And that has been borne out over the years. So, one of the first things you look at as a developer getting started on the AWS cloud with the language of your choice is the SDK — the software development kit — support. That's kind of where everyone gets started on the cloud. I still remember, at least in my Java days and then a bit of .NET later, that I was actually introduced to AWS through the SDK. The SDK is basically a set of libraries for building on AWS, and it's available in a variety of languages. It's open source — you can have a look at the code on GitHub — and there is a consistent feature set across these individual libraries.

So let's say you're working in one language and then you move to another: you can probably find the same feature set there, though of course the signatures of the function and method calls will change. In Go, for example, you'll notice that a lot of the calls in the SDK use pointers — that's because it takes advantage of certain things Go provides. Quickly, just as a refresher, here's what the request lifecycle looks like in an SDK.

The idea is that you, as a caller, have some code running that connects to an SDK client. You hand it an object, which gets serialized into an HTTP request to the cloud — so a JSON payload actually gets sent across. Once the SDK client receives the response, it unmarshals it back into an object that can be consumed in your code. A pretty straightforward approach, which lets you keep working within the language without being exposed to all the network calls — you just make a function call, and under the hood everything else is taken care of for you.

Now, the SDK for Go has been available for quite some time, and like I said about the SDK in general, it provides utilities and ways to build applications faster. We also recently announced that the SDK now follows the cadence of Go releases for its supported versions. But here's the thing: it doesn't stop there. The SDK is just the first part of getting started with Go on AWS.

So once I've taken the SDK, built the code, compiled it, and probably tested it a bit to see if it works, I ask the next question: how do I deploy my application, and where? From a compute perspective, this is basically what you already have on AWS.

From a virtual machine perspective, EC2 is basically a virtual compute platform that allows you to host an operating system. Think of it this way: everything from the hypervisor down is managed by AWS, and here the unit of work is a virtual machine.

Then you move to containers, where you have a container you've built yourself — you probably have a Dockerfile, you build an image, and then you deploy it on a container orchestration service. In our case, we have ECS, which is our homegrown container orchestration service, and we also have EKS, which is Kubernetes. For both of these options you have a serverless approach, Fargate, which allows you not to focus on the infrastructure side of things: you leave that to AWS, you just provide the container image and a bit of configuration, and we do the rest for you.

Now, there is a hybrid approach that gives you the best of both worlds, and we're going to talk a bit about that in the next few slides: App Runner, which is a great fit for running web apps and APIs. The idea is that it's a fully managed service that abstracts a lot of the deployment side of things for you — not only how the container is deployed, but also how it's accessed from the outside world. And there are a number of language runtimes supported, Go being one of them.

From a technical perspective, this is how it works. Let's say you're in the container ecosystem and you've been using something like Fargate on AWS: from a management perspective, we manage the Fargate agent and everything below it, all the way down to the hypervisor and the data center. With App Runner, AWS's share of the responsibility actually grows.

From an AWS perspective, we take care of the load balancer, the language runtime, and the deployment side of things, and all you have to do is manage the application code. You build the application code, put it in a repository, and point App Runner at it; every time you push an update, that triggers a build, and the result gets deployed on AWS. Even the auto scaling of containers is taken care of by App Runner, depending on the configuration that you set.

So from a developer experience perspective, this is what it looks like. You have development teams contributing code to a source code repository. There are a couple of options from there. Usually you build your own CI/CD pipeline first, because you want to get your hands dirty and understand what works. Then you ask yourself: do I really need to do all of this, versus just writing the code? You have a look at App Runner and you see there's an option to just point it at your source code repository and give it a configuration file, which you can create through the CLI.

App Runner has a CLI where, if you go to a project and run an init, it creates a file that App Runner then uses to deploy based on your configuration. The deployment is taken care of, and you also get a secure, publicly exposed service URL, which other services — or the public — can use. That's what we're going to see in our first demo in a bit.
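For reference, an `apprunner.yaml` configuration file for a Go service might look roughly like this — the build and run commands here are assumptions based on a typical small app:

```yaml
version: 1.0
runtime: go1          # App Runner managed Go runtime
build:
  commands:
    build:
      - go build -o app main.go
run:
  command: ./app
  network:
    port: 8080        # the port your Go server listens on
```

App Runner reads this file from the repository root on each deployment.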

So that's roughly how it works. Why does it pair really well with Go? Because there is support for the native Go runtime from the get-go in App Runner. The idea is that you just point App Runner at your code base, it packages your application based on how you do your builds — you can provide the command for that — and then it gets deployed.

Now you have the deployment done and your application runs, but like any other application, you want to save state somewhere. That's where databases come in. And the first question is: there are so many databases on AWS — which one do I actually choose? A lot of that depends on your use case: the kind of access patterns you have, the kind of data you're trying to store, and how you're going to access that data once it's stored.

We call these purpose-built databases, and we have a lot of examples online — you can ask me after the talk and I can share more resources on this — about how to decide based on your access patterns and then plug in a particular database. Whether it's relational, key-value, document, in-memory, time series, or even graph: you have everything.

The reason this is important from a Go perspective is that there is support for all of this, in some form or another. For the AWS-native databases — DynamoDB, Timestream, and Neptune — there is support in the AWS SDK for Go: you plug it into your code and start calling the database, which under the hood uses the APIs involved.

Then, if you are using a service that supports open source database protocols — think ElastiCache for Redis, or PostgreSQL or MySQL — you can use the corresponding open source libraries directly with these services on AWS. That's a great way to take code that runs against, say, PostgreSQL in an on-premises environment, recreate the database on AWS, and point the same drivers across. Finally, for the databases built to be API-compatible — for example DocumentDB, which offers MongoDB API compatibility, and Keyspaces, which is Cassandra-compatible — the open source MongoDB and Cassandra client libraries can connect to DocumentDB and Keyspaces on AWS as well.

So those are a lot of the libraries that are available, and you can have a look at them — you're probably already using some of them in one form or another. If not, there's a list here, and again, like I said, it's all going to be shared via that QR code I showed earlier.

Alright, great. So I have my deployment kind of sorted out.

I have my database already there — I've selected something, probably a NoSQL database, maybe DynamoDB. The next step: now I have a lot of data coming in, and I want to build data-intensive applications — I want to do data processing.

And this is where, from an analytics perspective, you have options like Kafka. On AWS we have Amazon MSK, and then we have Kinesis and also OpenSearch Service. For things like these, you can use the libraries that are already available out there.

So when you're building these data processing solutions, you have a couple of options. Usually, as a developer, you get started with the clients that are available. The thing with a client is that you get a lot of flexibility and control, but it also means you need to build a lot of logic yourself to manage those streams and events: retries, ensuring that every event gets processed, making sure events don't get processed twice. You need to manage all of this yourself — but at least you have that client support.

The same applies to Kinesis and OpenSearch. And at this point you want to start asking: I'm using a managed service on AWS, I'm using the client, but I'm writing a lot of application code — do I instead just want to consume the events coming through these services? That's where you can look at using one of them as an event source for Lambda.

What you do in that case is focus on your Go application and put it in a Lambda function — we'll see that in one of the demos, where I show a Lambda function with Go. The idea is that an event comes in, Lambda gets triggered, the function processes the event, and then it keeps moving.

So you have these options available, and based on your use case — how much flexibility you want, or whether you just want to manage the code side of things — you can choose among these solutions.

Now, a special mention for DynamoDB Streams. We see this a lot, especially with folks who want to react to events coming out of a NoSQL database — they use something like DynamoDB. Alongside the client-based approach using the Go SDK, you can have a Lambda function that is triggered by a stream coming from DynamoDB.

So for any write operation, say, you can have an event fired that triggers a Lambda function. There's a blog post talking about exactly how to use this. It's very useful, especially if you're in a domain-driven-design kind of setup — things like likes, where events come in regularly based on the transactions happening in your environment.

So, quickly, I'm going to show the first demo — this one is a recording; we'll show the other one live. The idea is that this Go application has been deployed on App Runner and talks to DynamoDB. You can create this manually yourself, or you can use CDK — which we talk about towards the end — to provision the infrastructure and get started. It's a great way to see how you can have a simple web application running on AWS.

In this case it's a URL shortener like any other: you give it a URL and it gives you back a short code. What's happening is that the container kicks in, executes the code, generates a short code, and stores it in DynamoDB.

What I'm doing here is creating one more, so that I can do a delete operation afterwards. As long as the short code exists, I can access it immediately — and right after that I do a delete, and then it's gone.

Alright, cool. To actually deploy this, there are a couple of things you can do from a code perspective. But before that, let me quickly jump into the code.

Alright, so everyone can see this. Cool. This is a standard Go application. What we've done is set up a router that is mapped to the service on top, which we saw in the demo. All we're doing is creating the short code — we have basic CRUD operations in this case — and we also have the database code here, which uses the SDK and lets us talk to DynamoDB.

We have the basic stuff like initialization, where we set the table name — we'll talk a bit about initialization best practices towards the end. We take the short code that's generated, save the URL, and return the short code. This is basically how you get started with accessing attributes and saving records on DynamoDB, and the same goes for when you do a fetch.

So it's a pretty straightforward approach when you're using the SDK, and the documentation is pretty helpful — it gives you a lot of examples to get started immediately.

And that's about it. Oh — ok, that's fine, thanks. Right, so that's a basic Go application running on App Runner. Just to show you how you can deploy this — and I'll read it out, because it's a bit small over there — the first step is to select the source code repository.

From there you can go to GitHub and select one of the sources where you keep the code. With GitHub, you select your repository and the branch that needs to be deployed, and immediately after that you decide how the deployment should be done.

By default, what happens is that every time a code change lands on that branch — a commit is merged, for example — it triggers a build, which then gets deployed.

Now, as I mentioned, you have the configuration file, which you can generate through the CLI, or you have the option to create the configuration through the UI directly. All you need to do is select the runtime, specify how the build is done — a standard `go build main.go` works for a small application — the entry point for the application (in this case the main function), and the port it listens on, and then deploy with that configuration.

So I just specify how much CPU and how much RAM I need, and that's about it — everything else gets set up. I can also add an instance role, which allows the service to talk to AWS services; in this case, it's talking to a DynamoDB table.

Cool. So that's App Runner, and that's basically how you could get started with Go on AWS: App Runner manages the whole deployment side of things, I have DynamoDB, and I'm using the SDK, which is pretty awesome.

Now, what more can we actually do with all of this? Let's go a bit deeper on serverless and see what you can do with Lambda in general.

One thing to know is that since Go compiles to a native binary, Lambda support for Go is not via a managed language runtime like Node or Java — the dedicated Go runtime exists right now, but it's being deprecated. Instead, Lambda has an option called a custom runtime. This allows you to ship a binary, hook into Lambda's custom runtime API, and manage the lifecycle yourself.

In Go's case, what happens is that you create the binary and then just say: fine, I'm going to use the custom runtime directly. The advantage is that everything becomes much smaller from a deployment perspective.

From a support perspective, all you need to do is include the Lambda library — the library support we're going to talk about in a bit. The idea is that this gives you a way to get started quickly with the application you have, without worrying too much about the runtime anymore — you manage that part yourself.
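Concretely, the build for a custom-runtime Go function usually looks something like the commands below. The function name and file layout are assumptions; what is fixed is that the binary must be named `bootstrap` for Lambda's `provided.*` runtimes.

```
# Compile a Linux binary named "bootstrap" (the required name for
# provided.* runtimes) and zip it for upload.
GOOS=linux GOARCH=arm64 go build -tags lambda.norpc -o bootstrap main.go
zip function.zip bootstrap

# Hypothetical deploy call -- function name and role are illustrative.
aws lambda create-function --function-name my-go-fn \
  --runtime provided.al2023 --handler bootstrap \
  --architectures arm64 \
  --zip-file fileb://function.zip --role <execution-role-arn>
```

The `lambda.norpc` build tag trims the legacy RPC mode from the aws-lambda-go library, shrinking the binary further.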

And this actually happened recently: Amazon Linux 2 was provided as a runtime option last year, and just a few weeks ago we added Amazon Linux 2023 as a runtime option on Lambda. This works for any language that compiles to a native binary — we have this with Rust, and we have it with Go.

And if you're still in the Java space — if you've heard of Quarkus and Micronaut — you have that as an option too, compiling to a native binary. You can use a clean OS base image, run your build against it, and produce the final container image, which you can then deploy on AWS Lambda — which we're going to talk about in a bit.

So there are a couple of packaging options with Lambda. You can take a Go function, package it into a zip file, and deploy it. But with a Docker image, it becomes easier to also test it out on different container orchestration services.

Sometimes you may have an application that makes more sense as a Lambda function because of spiky workloads — you only want it to run when a certain request comes in, and the rest of the time it doesn't need to run at all. That's where you can build the image.

Now, you have two options here. The first is to use the AL2 or AL2023 base image. In my Dockerfile I have a multi-stage build set up: I start with how to build the application and write those commands in the first stage; then, in the second stage, I use the AL2 or AL2023 base image and specify how to execute the application from Docker.

If you want to use a non-AWS base image — Alpine, for example, or maybe a certain base image you've created yourself — you can do that too. The only thing that changes is the second stage: you refer to that base image and build the final image from it.
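Putting the two stages together, a multi-stage Dockerfile along the lines described might look like this — the Go version, file names, and paths are assumptions for illustration:

```dockerfile
# Stage 1: build a static Go binary.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /bootstrap main.go

# Stage 2: AL2023-based Lambda "provided" base image; the binary is
# placed where the custom runtime expects to find bootstrap.
FROM public.ecr.aws/lambda/provided:al2023
COPY --from=build /bootstrap /var/runtime/bootstrap
ENTRYPOINT ["/var/runtime/bootstrap"]
```

For the Alpine (or custom base image) variant, only the `FROM` line of the second stage changes.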

Alright, cool. So that's all about deployments. Now let's talk about AI apps — it's the in thing right now; everyone wants smarter apps these days. And the thing is, when you think of AI apps, the first language that comes to mind — which I haven't mentioned yet — is Python, right?

But does that mean Go developers like us are left out in the cold? No, we're not. We have options, and one of them is Amazon Bedrock.

Amazon Bedrock is a service we launched recently, and we've made a lot of announcements about it over the last few months. It allows you to plug into foundation models: you get an API endpoint, and you can interact with that endpoint from code. We're going to show a bit of that — I think it's a pretty cool demo in that sense.

The way you get started is with the SDK: you include the Bedrock and Bedrock Runtime packages, and the Bedrock Runtime package is what you use to invoke the APIs on Bedrock directly.

Now, there is a getting started guide — my co-speaker actually created it, and he's also written a really comprehensive blog post on the topic — something you can use to get started with building a gen AI application.

One of the things we see is: ok, you're calling an API, that's great. But a generative AI application in production does a lot more. Think about history, for example — keeping the context so the model can keep helping you throughout the conversation or interaction.

And this is where LangChain comes in. LangChain was originally built for Python, but there are ports available, and langchaingo is one of those options. The idea is that LangChain gives you a pluggable platform of sorts, which allows you to add support for the different components you need.

You can have an LLM component inside — so you can say, I'm talking to Bedrock right now, and maybe you also have your own foundation model hosted somewhere else; maybe you're running Llama from Meta elsewhere. You can have all of that plugged into LangChain at the same time.

You can also add other components — history, for example. As part of the conversation I'm having with the foundation model, I want to preserve the history, so that every time I come back I can continue with the context that's already there.

That's very common in a lot of the applications you see these days. The thing to understand about an LLM is that when you talk to it, it takes in the context you already have, limited by the number of tokens it supports — the amount of text it can take in. If you want more context preserved across turns, you need a place to store it. In the same way, when the LLM sends data back to you, you want it to be able to send that data back continuously, as a stream.

And we're gonna see in the demo what that looks like from a streaming perspective.

So, quickly on to the demo. We're going to have some fun with Claude this time around, and this time I'm going to plug into my environment over here.

What I've done in this application: it's a Go application deployed on Lambda, and it talks to Bedrock. As for how we'll invoke the Lambda function — Lambda has an option where you can get a function URL on top of the function directly, versus having an API Gateway endpoint pointing to it. It's a great way to test Lambda functions, especially if you just want to run something quickly.

Alright, cool. Let's quickly do a build. As part of my build — in the previous demo we used App Runner directly; in this case we have an option called SAM. I'm actually jumping the gun on this one, but the idea is that we'll be talking about infrastructure as code later, in the next section.

SAM allows you to build serverless applications faster. It sits on top of CloudFormation and lets you create a template that you can then use. In my case, I already have a template here that provisions the resources: a serverless function — the Lambda function for chat — a couple of policies, and a DynamoDB table.

We use the DynamoDB table to store the history of the conversation, and we also have the outputs — we'll see those in a bit once it comes up. Ok, cool.

Alright, awesome. So, welcome to the serverless AI chat — I think it's the next big generative AI app. It talks to one of the foundation models: Claude, by Anthropic. What I'm going to do first is introduce myself: "Hello, I'm Fuzz. What's your name?"

So that's Claude — and I tell it I'm from Dubai. Now, this is the basic interaction you start with with an LLM. The idea is that I want it to remember what I said earlier, and that's where DynamoDB comes in, along with the LangChain plugin for the history side of things. I just ask: what's my name again?

I told it my name earlier, so it picked that up based on the context it has from before. Now I'm just going to quickly ask it another question — excuse my grammar on that one — asking it to point me to some resources, I guess.

I mean, the way we search for things these days has changed a lot — we're all becoming much nicer with our queries. Sorry — yeah, let's try this.

Alright. Oops. Yeah, that always happens — the demo works fine ten minutes before the talk, but every demo has a mind of its own during the talk itself.

Alright, cool. So we'll just switch back to the actual code and — oh, is that on top of my screen? Ok. That's why we have a backup in this case. So, yeah. Ok.

All right, cool. So that's the streaming part, right? That's basically why I wanted to show that demo. Um, and I think the Wi-Fi kind of kicked in a bit differently. And the idea is that with streaming, what you need to do is constantly keep listening to how the LLM is actually responding back, right?

So that's kind of where LangChain has a component for that. From a code perspective, this is basically what it looks like. You can reach out to me later after the talk and we can talk a bit more about the code if needed. But it kind of gives you an idea of how you can actually get started quickly with generative AI in Go, right?

So in this case, what we have done is use LangChain, and Abhishek, my co-speaker, has actually created an extension for using chat history in langchaingo. He's added that already; it's in a repo on GitHub. So we'll be sharing that in a couple of days.

The idea is that, again, we have the DynamoDB table, and we have the chat history side of things over here. This is basically where you're creating a conversation buffer that allows you to store that context with LangChain, backed by DynamoDB in this case, right? And then we invoke the model with that session. So it's a pretty cool way to do it.

And I think there's a lot that's happening in the generative AI space that's going to continue evolving. So let's step back as we head to the next one, right?

So we saw the whole idea of deployments, and we saw the whole idea of generative AI. And the thing is, now, from a tooling perspective, how can you actually be productive on AWS with Go, right?

And one of the things, again, going down the path of AI in general, is that you want to have a coding assistant that helps you understand how you can get started. I mean, if you're looking at code snippets, for example, you could actually use CodeWhisperer; that's one of those examples. And there's a lot that's been happening this year in terms of, you know, support in CodeWhisperer for having snippets of code coming in based on the context that you have, and support for code generation.

As an example, through the AWS Toolkit in an IDE, you can actually invoke CodeWhisperer and then get certain recommendations back on what kind of code you want to run based on prompts, right? And those are usually comments that you write in your code.

There is first-class support for Python, Java, and JavaScript. Support for Go is there, but to a certain level: security scanning is currently not available for Go, which is available for the other languages.

The idea is that you can have code in your IDE, have CodeWhisperer scan that code, and then have it suggest: OK, this code is probably not secure enough, and here are some options that you can leverage instead.

So the way it works is you write the code as a developer, and then you get back real-time code suggestions. We also have a reference tracker, and we can also do security scanning, not for Go, but for the other languages.

So, quickly, from a recorded demo perspective: I have a starting application where I want to upload a file to an S3 bucket, right? And in this case, what it does is start suggesting: OK, here's a couple of imports that I think you should be using. Of course, the choice is yours as a developer, so you can pick what you need and then just discard the rest.

So in my case, especially if I'm someone who's just starting out in Go, coming from another language, I just want to know how I can use the different imports. I just want to provide it a starting point, saying: OK, I'm going to use one of the SDKs, so here's basically what I want to use.

So I'm using the AWS SDK for Go v2, and then immediately after that, based on the prompt that I mentioned on top, it starts listing things like the S3 service client. And then I also want to use the S3 transfer manager because I want to interact with the S3 bucket, and probably anything else.

Now, I don't want any other imports that aren't needed, so I just...
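For reference, a finished version of that upload along the lines the demo was building toward looks roughly like the sketch below. This is a hedged example, not the exact demo code: the bucket and file names are placeholders, and it assumes the AWS SDK for Go v2 modules are in your go.mod and credentials are configured, so it is not runnable as-is.

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	// Load region and credentials from the default chain
	// (environment, shared config, instance role, and so on).
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}

	f, err := os.Open("report.txt") // placeholder file name
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// The transfer manager handles multipart uploads for larger
	// files, which is why the demo pulled it in alongside the
	// plain S3 client.
	uploader := manager.NewUploader(s3.NewFromConfig(cfg))
	_, err = uploader.Upload(context.TODO(), &s3.PutObjectInput{
		Bucket: aws.String("my-demo-bucket"), // placeholder bucket
		Key:    aws.String("report.txt"),
		Body:   f,
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("uploaded")
}
```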
