Cloud wins AI infrastructure debate by default

It's time to celebrate the amazing women leading the way in AI! Nominate your inspiring leaders for the VentureBeat Women in AI Awards today before June 18. Learn more

As artificial intelligence (AI) takes the world by storm, an old debate is reigniting: should companies self-host AI tools or rely on the cloud? For example, Sid Premkumar, founder of AI startup Lytix, recently shared his analysis of self-hosting an open-source AI model, suggesting it could be cheaper than using Amazon Web Services (AWS).

Premkumar's blog post, detailing a cost comparison between running the Llama-3 8B model on AWS and self-hosting the hardware, has set off a lively discussion reminiscent of the early days of cloud computing, when companies weighed the benefits and drawbacks of on-premises infrastructure against the emerging cloud model.

Premkumar's analysis suggested that while AWS could offer a price of $1 per million tokens, self-hosting could potentially reduce this cost to just $0.01 per million tokens, though with a long break-even period of around 5.5 years. However, this cost comparison overlooks a crucial factor: the total cost of ownership (TCO). It's a debate we have seen before, during "The Great Cloud Wars," when the cloud computing model emerged victorious despite initial skepticism.

The question remains: will on-premises AI infrastructure make a comeback, or will the cloud dominate once again?

VB Transform 2024 registration is open

Join enterprise leaders in San Francisco from July 9 to 11 for our flagship AI event. Connect with peers, explore the opportunities and challenges of generative AI, and learn how to integrate AI applications into your industry. Register now

A closer look at Premkumar's analysis

Premkumar's blog post provides a detailed breakdown of the costs associated with self-hosting the Llama-3 8B model. He compared the cost of running the model on an AWS g4dn.16xlarge instance, which features 4 Nvidia Tesla T4 GPUs, 192 GB of memory, and 48 virtual CPUs, to the cost of self-hosting a similar hardware configuration.

According to Premkumar's calculations, running the model on AWS would cost approximately $2,816.64 per month, assuming full utilization. With the model able to process around 157 million tokens per month, this translates to a cost of $17.93 per million tokens.
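That per-million-token figure follows directly from dividing the monthly instance bill by the monthly token throughput. A minimal sketch, using only the figures quoted above:

```python
# Sanity-check the AWS cost-per-million-tokens figure quoted in the post.
monthly_instance_cost = 2816.64   # USD/month for the g4dn.16xlarge instance, per the post
monthly_tokens = 157_000_000      # tokens processed per month at full utilization, per the post

cost_per_million_tokens = monthly_instance_cost / (monthly_tokens / 1_000_000)
print(f"${cost_per_million_tokens:.2f} per million tokens")  # ≈ $17.94
```

The small difference from the article's $17.93 comes from rounding the monthly token count.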

In contrast, Premkumar estimates that self-hosting the hardware would require an upfront investment of around $3,800 for 4 Nvidia Tesla T4 GPUs and an additional $1,000 for the rest of the system. Factoring in energy costs of approximately $100 per month, the self-hosted solution could process the same 157 million tokens at a cost of just $0.000000636637738 per token, or $0.01 per million tokens.

While this may seem like a compelling argument for self-hosting, it's important to note that Premkumar's analysis assumed 100% utilization of the hardware, which is rarely the case in real-world scenarios. In addition, the self-hosted approach would require a break-even period of around 5.5 years to recoup the initial hardware investment, during which time newer, more powerful hardware may already have emerged.

A familiar debate 

In the early days of cloud computing, proponents of on-premises infrastructure made many sharp and convincing arguments. They cited the security and control of keeping data in-house, the potential cost savings of investing in their own hardware, better performance for latency-sensitive tasks, the flexibility of customization, and the desire to avoid vendor lock-in.

Today, defenders of on-premises AI infrastructure are singing a similar tune. They argue that for highly regulated industries like healthcare and finance, the compliance and control of on-premises is preferable. They believe investing in new, specialized AI hardware can be more cost-effective in the long run than ongoing cloud costs, especially for data-heavy workloads. They cite the performance benefits for latency-sensitive AI tasks, the flexibility to customize infrastructure to their exact...

