Defi Explodes On Cardano – And $Sponge Forecast Sees A 100X (December 2023)
Cardano (ADA) prices have slid deeper into a negative trend even though its underlying network has recorded an increase in executed transactions. This has left many investors confused, especially those who were backing the famed crypto. While many have started deciphering what the latest development means for Cardano’s price moving forward, others have started considering another token making headlines in the meme market: SpongeBob Token ($SPONGE).

Cardano DeFi Activity Explodes – Time to Buy $SPONGE?
For those who don’t know, Cardano is a next-generation evolution of the Ethereum idea, with a focus on sustainability, scalability, and interoperability. By using the proof-of-stake consensus mechanism called Ouroboros, it is more energy-efficient and secure than the likes of Bitcoin.
Cardano’s underlying network has recorded a surge in executed transactions despite the negative sentiment surrounding the market. The majority of these transactions occur in Cardano DeFi apps, driven by two new meme tokens.
Cardano’s strong fundamentals, along with bullish on-chain data, could act as catalysts for a market reversal. However, investors need to watch for signs of a trend reversal and follow market dynamics to stay in the game with Cardano. Alternatively, investors might be interested in SpongeBob Token, which has all the hallmarks of a meme coin ready to explode.

Revolutionizing the Meme Coin Market with SpongeBob Token
Almost every 1990s kid knows who SpongeBob SquarePants is. The sponge character who lives in the fictional underwater town of Bikini Bottom has entertained children of all ages for more than a decade. But what happens when a meme coin tries to capture that popularity in 2023?
Known as the “meme coin equivalent of the Krabby Patty,” SpongeBob Token ($SPONGE) is a new meme coin based on the famous Nickelodeon cartoon character. The project, which is meant to be fun and appealing to nostalgic crypto enthusiasts, has started recording massive gains.
The $SPONGE token has already achieved a market cap of more than $12.5 million and is currently in the midst of a price surge. At a time when meme coins are struggling, whales have started picking up $SPONGE, driving up its value. Since meme coins have a tendency to explode in price, this could be the right time to invest in SpongeBob Token.

SpongeBob Token Price Performance – Potential for 100x Gains?
SpongeBob Token launched on May 4th, 2023 with a value of $0.000073. Less than a day later, $SPONGE hit $0.00004584, and by May 5th, 2023 it had climbed to a staggering $0.000616. This means the token experienced massive growth of 1400%.
The SpongeBob Token team made a decision that differentiates it from most other meme projects: instead of holding a presale, $SPONGE was listed directly on Uniswap without engaging much in social media hype.
This approach worked, spreading the news like wildfire. As a result, an increasing number of investors flocked to the project, increasing its value exponentially.

SpongeBob Token – Community and Listings
A look at SpongeBob Token’s fundamentals suggests the project is in good hands. It has amassed an impressive 40,000 Twitter followers and more than 20,000 Telegram members, and its Discord server serves as a vibrant space for the community.
With more than 3,000 members on its Discord channel, SpongeBob Token has made all members eligible for airdrops. This sort of organic growth is what most businesses are after, and analysts expect those numbers to double as community engagement increases.
At the time of writing, $SPONGE is listed on major exchanges such as Bitget, BTCEX, Toobit, CoinW, LBank, Poloniex and MEXC Global. Several other top exchanges have also kept $SPONGE on their radar.

$SPONGE Making Headlines in All Major News Outlets
Elon Musk has also tweeted about SpongeBob Token, which has attracted a lot of mainstream attention. SpongeBob Token’s team hopes that some of the entrepreneur’s love for meme coins may rub off on their project too.

SpongeBob Token – Airdrop Information
There are three main criteria to receive airdropped tokens.
The level of engagement on Discord (highly engaged)
The amount of $SPONGE purchased by the holder.
The amount of $SPONGE traded on Uniswap.
The scores are reset every week, with the cut-off date for eligibility yet to be announced.

Conclusion
Cardano has been in the market for a long time and has confident investors who are ready to see through these tumultuous times. However, not every investor has the same degree of risk appetite and planning.
As part of the Cardano ecosystem, ADALend builds a scalable and decentralized lending protocol, which the Cardano community will regulate.
A new generation of flexible financial services for digital asset markets will be powered by the ADALend protocol, which will provide a foundation for speedy loan approval, automated collateralization, trustless custody, and liquidity in the digital asset markets.
Cardano (ADA) is a blockchain platform with various capabilities that will power the ADALend protocol. To produce a scalable, transparent, and resilient cryptocurrency, Cardano (ADA) uses cutting-edge technology. The fact that it is a publicly accessible blockchain network makes it one of the many well-known cryptocurrencies that have grown and developed rapidly in recent years. With Input Output Hong Kong (IOHK), Charles Hoskinson laid the framework in 2017 for what is unquestionably the most vital third-generation blockchain asset now available on the market.
A well-organized team is in place at Cardano (ADA), and the company has a clearly defined plan for the future development of its projects. With its colossal scalability potential and the ability to construct decentralized applications, the blockchain is a robust technology that satisfies future demands in many fields.
ADALend heats the DeFi Space
ADALend chose Cardano as the primary blockchain to power its DeFi system, unlike the Ethereum-based AAVE, because Cardano is significantly less expensive for sending, receiving, and initiating contracts. When Ethereum gas prices surged, dissatisfied users realized that fees were a serious concern for everyone who used the AAVE protocol at the time.
It has been reported that the average transaction cost at times went as high as 80 USD (BitInfoCharts). Cardano fees remain low compared to other cryptocurrencies, primarily due to the network’s dual-layer design, which isolates calculations from settlements.
Because it still employs a Proof-of-Work (PoW) consensus mechanism, the Ethereum network is inefficient compared to the Cardano blockchain, which uses a Proof-of-Stake (PoS) system. The Cardano blockchain can process a significantly greater number of transactions than Ethereum, and it operates at a considerably faster rate. To make auditing as simple as possible, the Cardano codebase is written in Haskell, a widely used programming language chosen explicitly for this purpose.
Solidity, a particularly specialized programming language created by Ethereum developers, is written by only a small number of programmers, let alone subjected to rigorous peer review. The more engineers who can examine and audit code, the safer and more impenetrable the system will be. To put it another way, the Cardano developers want the blockchain to be as free of code flaws as possible to prevent future security risks.
ADALend will leverage the oracles Chainlink and Ergo to provide a more secure and efficient experience for clients. Using Ergo’s oracle pools is more efficient and configurable than Chainlink’s oracle architecture, which relies on many single oracle data sources. AAVE solely makes use of Chainlink oracles.
Cardano makes use of the Ouroboros consensus algorithm, which is a Proof-of-Stake system. Because ADA holders can delegate their assets to secure the network, this closed-loop approach maximizes the efficiency with which network resources are utilized. The outcome is a system significantly less resource-intensive than Ethereum, which is secured primarily by miners consuming vast quantities of electricity.
To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.
Social psychologist Frank Rosenblatt had such a passion for brain mechanics that he built a computer model fashioned after a human brain’s neural network, and trained it to recognize simple patterns. He called his IBM 704-based model Perceptron. A New York Times headline called it an “Embryo of Computer Designed to Read and Grow Wiser.” Popular Science called Perceptrons “Machines that learn.” At the time, Rosenblatt claimed “it would be possible to build brains that could reproduce themselves on an assembly line and which would be conscious of their existence.” The year was 1958.
Many assailed Rosenblatt’s approach to artificial intelligence as being computationally impractical and hopelessly simplistic. A critical 1969 book by Turing Award winner Marvin Minsky marked the onset of a period dubbed the AI winter, when little funding was devoted to such research—a short revival in the early ‘80s notwithstanding.
In a 1989 Popular Science piece, “Brain-Style Computers,” science and medical writer Naomi Freundlich was among the first journalists to anticipate the thaw of that long winter, which lingered into the ‘90s. Even before Geoffrey Hinton, considered one of the founders of modern deep learning techniques, published his seminal 1992 explainer in Scientific American, Freundlich’s reporting offered one of the most comprehensive insights into what was about to unfold in AI in the next two decades.
“The resurgence of more-sophisticated neural networks,” wrote Freundlich, “was largely due to the availability of low-cost memory, greater computer power, and more-sophisticated learning laws.” Of course, the missing ingredient in 1989 was data—the vast troves of information, labeled and unlabeled, that today’s deep-learning neural networks inhale to train themselves. It was the rapid expansion of the internet, starting in the late 1990s, that made big data possible and, coupled with the other ingredients noted by Freundlich, unleashed AI—nearly half a century after Rosenblatt’s Perceptron debut.

“Brain-style computers” (Naomi J. Freundlich, February 1989)
I walked into the semi-circular lecture hall at Columbia University and searched for a seat within the crowded tiered gallery. An excited buzz petered out to a few coughs and rustling paper as a young man wearing circular wire-rimmed glasses walked toward the lectern carrying a portable stereo tape player under his arm. Dressed in a tweed jacket and corduroys, he looked like an Ivy League student about to play us some of his favorite rock tunes. But instead, when he pushed the “on” button, a string of garbled baby talk, more specifically baby-computer talk, came flooding out. At first unintelligible, really just bursts of sounds, the child-robot voice repeated the string over and over until it became ten distinct words.
“This is a recording of a computer that taught itself to pronounce English text overnight,” said Terrence Sejnowski, a biophysicist at Johns Hopkins University. A jubilant crowd broke into animated applause. Sejnowski had just demonstrated a “learning” computer, one of the first of a radically new kind of artificial-intelligence machine.
Called neural networks, these computers are loosely modeled after the interconnected web of neurons, or nerve cells, in the brain. They represent a dramatic change in the way scientists are thinking about artificial intelligence: a leaning toward a more literal interpretation of how the brain functions. The reason: Although some of today’s computers are extremely powerful processors that can crunch numbers at phenomenal speeds, they fail at tasks a child does with ease, such as recognizing faces, learning to speak and walk, or reading printed text. According to one expert, the visual system of one human being can do more image processing than all the supercomputers in the world put together. These kinds of tasks require an enormous number of rules and instructions embodying every possible variable. Neural networks do not require this kind of programming; rather, like humans, they seem to learn by experience.
For the military, this means target-recognition systems, self-navigating tanks, and even smart missiles that chase targets. For the business world, neural networks promise handwriting- and face-recognition systems, computer loan officers, and bond traders. And for the manufacturing sector, quality-control vision systems and robot control are just two goals.
Interest in neural networks has grown exponentially. A recent meeting in San Diego brought 2,000 participants. More than 100 companies are working on neural networks, including several small start-ups that have begun marketing neural-network software and peripherals. Some computer giants, such as IBM, AT&T, Texas Instruments, Nippon Electric Co., and Fujitsu, are also going full ahead with research. And the Defense Advanced Research Projects Agency (or DARPA) released a study last year that recommended neural-network funding of $400 million over eight years. It would be one of the largest programs ever undertaken by the agency.
Ever since the early days of computer science, the brain has been a model for emerging machines. But compared with the brain, today’s computers are little more than glorified calculators. The reason: A computer has a single processor operating on programmed instructions. Each task is divided into many tiny steps that are performed quickly, one at a time. This pipeline approach leaves computers vulnerable to a condition commonly found on California freeways: One stalled car-one unsolvable step-can back up traffic indefinitely. The brain, in contrast, is made up of billions of neurons, or nerve cells, each connected to thousands of others. A specific task enlists the activity of whole fields of neurons; the communication pathways among them lead to solutions.
The excitement over neural networks is not new and neither are the “brain makers.” Warren S. McCulloch, a psychiatrist at the Universities of Illinois and Chicago, and his student Walter H. Pitts began studying neurons as logic devices in the early 1940s. They wrote an article outlining how neurons communicate with each other electrochemically: A neuron receives inputs from surrounding cells. If the sum of the inputs is positive and above a certain preset threshold, the neuron will fire. Suppose, for example, that a neuron has a threshold of two and has two connections, A and B. The neuron will be on only if both A and B are on. This is called a logical “and” operation. Another logic operation called the “inclusive or” is achieved by setting the threshold at one: If either A or B is on, the neuron is on. If both A and B are on, then the neuron is also on.
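The threshold logic McCulloch and Pitts described reduces to a few lines in a modern language. This is an illustrative sketch in Python, not anything from the era:

```python
def mp_neuron(inputs, threshold):
    """A McCulloch-Pitts unit: fire (output 1) if the summed input meets the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# Logical "and": two connections, threshold of two; fires only when both are on.
assert mp_neuron([1, 1], threshold=2) == 1
assert mp_neuron([1, 0], threshold=2) == 0

# Inclusive "or": same connections, threshold of one.
assert mp_neuron([1, 0], threshold=1) == 1
assert mp_neuron([1, 1], threshold=1) == 1
assert mp_neuron([0, 0], threshold=1) == 0
print("both logic operations check out")
```

Chaining such units gives arbitrary combinations of these logic operations, which is what made the 1940s result so striking.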
In 1958 Cornell University psychologist Frank Rosenblatt used hundreds of these artificial “neurons” to develop a two-layer pattern-learning network called the perceptron. The key to Rosenblatt’s system was that it learned. In the brain, learning occurs predominantly by modification of the connections between neurons. Simply put, if two neurons are active at once and they’re connected, then the synapses (connections) between them will get stronger. This learning rule is called Hebb’s rule and was the basis for learning in the perceptron. Using Hebb’s rule, the network appears to “learn by experience” because connections that are used often are reinforced. The electronic analog of a synapse is a resistor and in the perceptron resistors controlled the amount of current that flowed between transistor circuits.
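Hebb's rule can be sketched as a single weight update: strengthen a connection in proportion to the co-activity of the two units it joins. The learning rate here is an arbitrary illustrative choice:

```python
def hebb_update(weight, pre, post, rate=0.125):
    """Hebb's rule: strengthen the connection in proportion to co-activity."""
    return weight + rate * pre * post

w = 0.0
for _ in range(5):  # five episodes of the two units firing together
    w = hebb_update(w, pre=1, post=1)
print(w)  # 0.625: repeated co-activation has reinforced the connection

# When either unit is silent, the weight is left unchanged:
print(hebb_update(w, pre=1, post=0))  # still 0.625
```

In the perceptron's hardware, this growing `w` corresponded to a resistor passing more current between transistor circuits.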
Other simple networks were also built at this time. Bernard Widrow, an electrical engineer at Stanford University, developed a machine called Adaline (for adaptive linear neurons) that could translate speech, play blackjack, and predict weather for the San Francisco area better than any weatherman. The neural network field was an active one until 1969.
In that year the Massachusetts Institute of Technology’s Marvin Minsky and Seymour Papert—major forces in the rule-based AI field—wrote a book called Perceptrons that attacked the perceptron design as being “too simple to be serious.” The main problem: The perceptron was a two-layer system, input led directly into output, and learning was limited. “What Rosenblatt and others wanted to do basically was to solve difficult problems with a knee-jerk reflex,” says Sejnowski.
The other problem was that perceptrons were limited in the logic operations they could execute, and therefore they could only solve clearly definable problems, deciding between an L and a T for example. The reason: Perceptrons could not handle the third logic operation, called the “exclusive or.” This operation requires that the logic unit turn on if either A or B is on, but not if both are.
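The limitation is easy to demonstrate. A single-layer perceptron trained with the classic learning rule converges on a linearly separable function like "and," but never on "exclusive or." A sketch, with an arbitrary epoch budget:

```python
def train_perceptron(examples, epochs=100):
    """Single-layer perceptron: weighted sum plus bias, trained by the perceptron rule."""
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x0, x1), target in examples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out
            if err:
                errors += 1
                w0 += err * x0   # nudge the weights toward the target
                w1 += err * x1
                b += err
        if errors == 0:
            return True   # converged: a separating line exists
    return False          # never converged within the epoch budget

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(train_perceptron(AND))  # True: AND is linearly separable
print(train_perceptron(XOR))  # False: no single line separates XOR's classes
```

No matter how long the loop runs, no choice of two weights and a bias can draw a line between XOR's "on" and "off" cases.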
According to Tom Schwartz, a neural-network consultant in Mountain View, Calif., technology constraints limited the success of perceptrons. “The idea of a multilayer perceptron was proposed by Rosenblatt, but without a good multilayer learning law you were limited in what you could do with neural nets.” Minsky’s book, combined with the perceptron’s failure to achieve developers’ expectations, squelched the neural-network boom. Computer scientists charged ahead with traditional artificial intelligence, such as expert systems.

Underground connections
During the “dark ages,” as some call the 15 years between the publication of Minsky’s Perceptrons and the recent revival of neural networks, some die-hard “connectionists,” or neural-network adherents, prevailed. One of them was physicist John J. Hopfield, who splits his time between the California Institute of Technology and AT&T Bell Laboratories. A paper he wrote in 1982 described mathematically how neurons could act collectively to process and store information, comparing a problem’s solution in a neural network with achieving the lowest energy state in physics. As an example, Hopfield demonstrated how a network could solve the “traveling salesman” problem, finding the shortest route through a group of cities, a problem that had long eluded conventional computers. This paper is credited with reinvigorating the neural-network field. “It took a lot of guts to publish that paper in 1982,” says Schwartz. “Hopfield should be known as the fellow who brought neural nets back from the dead.”
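Hopfield's idea of neurons settling collectively into a low-energy state can be sketched as an associative memory that recalls a stored pattern from a corrupted cue. This is an illustrative toy, not Hopfield's 1982 formulation in full:

```python
def hopfield_weights(pattern):
    """Hebbian outer-product weights storing one +/-1 pattern; no self-connections."""
    n = len(pattern)
    return [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
            for i in range(n)]

def recall(weights, state, steps=5):
    """Update each unit toward the weighted vote of the others, descending the
    network's energy until the state settles into a stored attractor."""
    state = list(state)
    for _ in range(steps):
        for i in range(len(state)):
            total = sum(w * s for w, s in zip(weights[i], state))
            state[i] = 1 if total >= 0 else -1
    return state

stored = [1, -1, 1, 1, -1, -1, 1, -1]
W = hopfield_weights(stored)

# Corrupt two of the eight units, then let the network settle.
cue = list(stored)
cue[0], cue[3] = -cue[0], -cue[3]
print(recall(W, cue) == stored)  # True: the cue falls back into the stored pattern
```

Each update lowers (or holds) the network's energy, which is the sense in which solving a problem resembles reaching the lowest energy state in physics.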
The resurgence of more-sophisticated neural networks was largely due to the availability of low-cost memory, greater computer power, and more-sophisticated learning laws. The most important of these learning laws is something called back-propagation, illustrated dramatically by Sejnowski’s NetTalk, which I heard at Columbia.
With NetTalk and subsequent neural networks, a third layer, called the hidden layer, is added to the two-layer network. This hidden layer is analogous to the brain’s interneurons, which map out pathways between the sensory and motor neurons. NetTalk is a neural-network simulation with 300 processing units, representing neurons, and over 10,000 connections arranged in three layers. For the demonstration I heard, the initial training input was a 500-word text of a first-grader’s conversation. The output layer consisted of units that encoded the 55 possible phonemes, the discrete speech sounds of the English language. The output units can drive a digital speech synthesizer that produces sounds from a string of phonemes. When NetTalk saw the letter N (in the word “can,” for example) it randomly (and erroneously) activated a set of hidden-layer units that signaled the output “ah.” This output was then compared with a model, a correct letter-to-phoneme translation, to calculate the error mathematically. The learning rule, which is actually a mathematical formula, corrects this error by “apportioning the blame”: reducing the strengths of the connections between the hidden layer that corresponds to N and the output that corresponds to “ah.” “At the beginning of NetTalk all the connection strengths are random, so the output that the network produces is random,” says Sejnowski. “Very quickly as we change the weights to minimize error, the network starts picking up on the regular pattern. It distinguishes consonants and vowels, and can make finer distinctions according to particular ways of pronouncing individual letters.”
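The cycle described above, random weights, a forward pass, comparison with the model, and blame apportioned backward, is the back-propagation loop. Below is a minimal three-layer network learning "exclusive or," the very function a two-layer perceptron cannot handle; the layer size, learning rate, and epoch count are arbitrary illustrative choices:

```python
import math
import random

random.seed(0)

# Tiny 2-input, 4-hidden, 1-output network.
H = 4
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b_h = [0.0] * H
w_ho = [random.uniform(-1, 1) for _ in range(H)]
b_o = 0.0
rate = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(w_ih[h], x)) + b_h[h])
              for h in range(H)]
    out = sigmoid(sum(w * h for w, h in zip(w_ho, hidden)) + b_o)
    return hidden, out

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def epoch_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in XOR)

before = epoch_error()
for _ in range(5000):
    for x, target in XOR:
        hidden, out = forward(x)
        # "Apportion the blame": the error signal flows backward from the output.
        d_out = (out - target) * out * (1 - out)
        d_hid = [d_out * w_ho[h] * hidden[h] * (1 - hidden[h]) for h in range(H)]
        for h in range(H):
            w_ho[h] -= rate * d_out * hidden[h]
            for i in range(2):
                w_ih[h][i] -= rate * d_hid[h] * x[i]
            b_h[h] -= rate * d_hid[h]
        b_o -= rate * d_out
after = epoch_error()
print(before, "->", after)  # the squared error shrinks as blame is apportioned
```

The hidden layer is what makes this work: it gives the network an internal re-coding of the inputs, just as NetTalk's hidden units re-code letters on the way to phonemes.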
Trained on 1,000 words, within a week NetTalk developed a 20,000-word dictionary. “The important point is that the network was not only able to memorize the training words, but it generalized. It was able to predict new words it had never seen before,” says Sejnowski. “It’s similar to how humans would generalize while reading ‘Jabberwocky.’ “
Generalizing is an important goal for neural networks. To illustrate this, Hopfield described a munition identification problem he worked on two summers ago in Fort Monmouth, N.J. “Let’s say a battalion needs to identify an unexploded munition before it can be disarmed,” he says. “Unfortunately there are 50,000 different kinds of hardware it might be. A traditional computer would make the identification using a treelike decision process,” says Hopfield. ”The first decision could be based on the length of the munition.” But there’s one problem: “It turns out the munition’s nose is buried in the sand, and obviously a soldier can’t go out and measure how long it is. Although you’ve got lots of information, there are always going to be pieces that you are not allowed to get. As a result you can’t go through a treelike structure and make an identification.”
Hopfield sees this kind of problem as approachable from a neural-network point of view. “With a neural net you could know ten out of thirty pieces of information about the munition and get an answer.”
Besides generalizing, another important feature of neural networks is that they “degrade gracefully.” The human brain is in a constant state of degradation: one night spent drinking alcohol consumes thousands of brain cells. But because whole fields of neurons contribute to every task, the loss of a few is not noticeable. The same is true with neural networks. David Rumelhart, a psychologist and neural-network researcher at Stanford University, explains: “The behavior of the network is not determined by one little localized part, but in fact by the interactions of all the units in the network. If you delete one of the units, it’s not terribly important. Deleting one of the components in a conventional computer will typically bring computation to a halt.”

Simulating networks
Although neural networks can be built from wires and transistors, according to Schwartz, “Ninety-nine percent of what people talk about in neural nets are really software simulations of neural nets run on conventional processors.” Simulating a neural network means mathematically defining the nodes (processors) and weights (adaptive coefficients) assigned to it. “The processing that each element does is determined by a mathematical formula that defines the element’s output signal as a function of whatever input signals have just arrived and the adaptive coefficients present in the local memory,” explains Robert Hecht-Nielsen, president of Hecht-Nielsen Neurocomputer Corp.
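Hecht-Nielsen's description, an output signal computed from the just-arrived inputs and the locally stored adaptive coefficients, is essentially a one-liner. A sketch, with an example activation function chosen for illustration:

```python
def node_output(inputs, coefficients, activation):
    """One simulated element: output = activation(weighted sum of inputs),
    where the weights are the node's adaptive coefficients."""
    return activation(sum(x * w for x, w in zip(inputs, coefficients)))

def step(s):
    # Hard-threshold activation: fire if the summed signal is positive.
    return 1 if s > 0 else 0

print(node_output([1, 0, 1], [0.5, -0.2, 0.4], step))  # 0.5 + 0.4 = 0.9 > 0, so: 1
```

A software simulation is just thousands of such evaluations per pass, with a learning law adjusting the coefficients between passes.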
Some companies, such as Hecht-Nielsen Neurocomputer in San Diego, Synaptics Inc. in San Jose, Calif., and most recently Nippon Electric Co., are selling specially wired boards that link to conventional computers. The neural network is simulated on the board and then integrated via software to an IBM PC-type machine.
Several military contractors including Bendix Aerospace, TRW, and the University of Pennsylvania are also going ahead with neural networks for signal processing-training networks to identify enemy vehicles by their radar or sonar patterns, for example.
Still, there are some groups concentrating on neural network chips. At Bell Laboratories a group headed by solid-state physicist Larry Jackel constructed an experimental neural-net chip that has 75,000 transistors and an array of 54 simple processors connected by a network of resistors. The chip is about the size of a dime. Also developed at Bell Labs is a chip containing 14,400 artificial neurons made of light-sensitive amorphous silicon and deposited as a thin film on glass. When a slide is projected on the film several times, the image gets stored in the network. If the network is then shown just a small part of the image, it will reconstruct the original picture.
Finally, at Synaptics, CalTech’s Carver Mead is designing analog chips modeled after the human retina and cochlea.

Cheap imitation
“There are at least fifty different types of networks being explored in research or being developed for applications,” says Hecht-Nielsen. “The differences are mainly in the learning laws implemented and the topology [detailed mapping] of the connections.” Most of these networks are called “feed-forward” networks: information is passed forward in the layered network from inputs to hidden units and finally outputs.
John Hopfield is not sure this is the best architecture for neural nets. “In neurobiology there is an immense amount of feedback. You have connections coming back through the layers or interconnections within the layers. That makes the system much more powerful from a computational point of view.”
That kind of criticism brings up the question of how closely neural networks need to model the brain. Computer scientist Scott Fahlman says that neural-network researchers and neurobiologists are “loosely coupled.” “Neurobiologists can tell me that the right number of elements to think about is tens of billions. They can tell me that the right kind of interconnection is one thousand or ten thousand to each neuron. And they can tell me that there doesn’t seem to be a lot of flow backward through a neuron,” he says. But unfortunately, he adds, “they can’t provide information about exactly what’s going on in the synapse of the neuron.”
The cover of the February 1989 issue of Popular Science featured a deadly new fighter plane and news in glues.
Some text has been edited to match contemporary standards and style.
Decentralized finance (DeFi) has seen surging interest even amid market chaos. Last year alone, the ecosystem saw a massive surge in total value locked, from a few hundred million dollars to more than $20 billion in a matter of months. This surge caught the eye of many investors, but institutions still seemed hesitant to dive in.
Mark Cuban, billionaire investor and owner of the Dallas Mavericks, recently published a blog post centered on DeFi and its various use cases. The post, titled ‘The Brilliance of Yield Farming, Liquidity Providing, and Valuing Crypto Projects’, analyzed different aspects of the space and examined the potential of DeFi projects.
He put forward his take on two projects, MATIC and AAVE.
On Matic, Cuban stated:
‘They (Polygon) are a very simple business that is hard to execute. Their job is to provide tools that enable transactions using their Ethereum/Solidity smart contracts, built primarily by outside parties, to take place as quickly and inexpensively as possible while still being able to bring in more money than they spend.’
On AAVE, he expressed a similar positive sentiment towards the project. He said:
They can make a FORTUNE for their depositors and token holders because their overhead vs their revenue is miniscule. Automated Financial Market Makers are so much more capital and operationally efficient than similar traditional companies. Banks should be scared.
Overall DeFi industry:
Zooming out a bit, with regard to DeFi he added,
“Yield Farming via Staking and Liquidity Providing are a core feature of most, if not all Decentralized Finance (DeFi) projects. “
He then put forward a basic difference between centralized and decentralized business models. For the former, one first needs to raise capital and start the business, and only then does more capital follow. For the latter,
“organizations don’t require near as much capital to start and operate. Rather than raising money in a traditional sense, they can sell tokens to raise capital, they can reward Liquidity Providers instead of having to raise liquidity for financial transactions, and much of the critical security is provided by Miners or Validators……”
While discussing the future potential of the DeFi industry, the Shark Tank star wasn’t shy about shedding light on its current relationship with US regulators. He noted that most, if not all, DeFi projects were based outside of the US.
“One place that these organizations are VERY DIFFERENT is that they are not based in the USA and they are not corporations. They are foundations. They are Decentralized in their governance,” he said.
Could this be a possible reason why DeFi projects remain without a “center”? He further added:
“This is not only because of the ethos of Decentralized Autonomous Organizations (DAOS), but also because of the ABSOLUTE STUPIDITY of our regulators forcing some of the most impactful and innovative entrepreneurs of this generation to foreign countries to run their businesses.”
Recently, Cuban had appeared in a lot of interviews, talking about DeFi. As part of his prediction, he had stated:
“…in 10-20 years we would see world-changing companies had been created in 2023 and 2023. Among those companies, it’s already a certainty that De-Fi and other crypto organizations will be at or near the top of the list.”
Cuban argued that the current restrictions and skepticism the crypto industry faces from US authorities could take a heavy toll on the country’s economy. According to him, the US should consider embracing and supporting crypto innovations such as DeFi. If not, then
“we will lose the next great growth engine that this country needs.”
WASHINGTON — As federal agencies put their budgets under the microscope looking for items to trim, under-producing IT projects could land on the chopping block.
Federal CIOs are under pressure from the White House tech team to eliminate inefficient tech deployments, and either overhaul or abandon projects that are running over budget or behind schedule.
That means that IT firms looking to do business with the federal government are going to have to prove their case, according to Teresa Carlson, vice president of Microsoft’s (NASDAQ: MSFT) federal government division.
“I think in the federal [market] we have to prove our worth now,” Carlson said in a presentation here at Microsoft’s annual Worldwide Partner Conference. “Business cases are going to become extremely important.”
Greg Myers, the general manager of Microsoft’s federal civilian business, said that the company is pitching federal IT managers on cloud-based technologies that it argues are more agile and entail more flexible operating agreements and lower cost than enterprise infrastructure solutions from companies like Oracle (NASDAQ: ORCL) and VMware (NYSE: VMW).
Myers also described a growing tension in the government IT sector in response to the Obama administration’s emphasis on bringing more data and services online while still maintaining and strengthening the security of sensitive data. That friction is particularly acute in areas such as health care, where Microsoft and scores of other firms are vying for government contracts in areas such as electronic health records (EHR).
“You’re seeing this violent collision of being as open as they possibly can on the civilian side, especially in healthcare, and you’ve got, obviously, security [which] is paramount,” he said.
“The CIOs…are really struggling with how to serve both masters. It’s one thing if someone hacks a Google mail account or a Facebook account. It’s quite another if someone gets into your EHR,” he added.
By convening its partner conference in the nation’s capital, Microsoft executives have had a chance to hear about the government’s IT priorities directly from the officials making the decisions.
CEO Steve Ballmer has been making the rounds in Washington this week, meeting with the deputy secretaries at the president’s management council and sitting in on a CIO roundtable. Ballmer also met with senior officials at the Department of Veterans Affairs, and stopped in at Walter Reed Hospital to hand out Xboxes and Zunes to soldiers wounded in combat.
One of Ballmer’s messages to the CIOs was to shorten the development and procurement cycle, Carlson said. Within Microsoft, projects that drag on for more than a year are closely scrutinized, while none is generally allowed to languish for more than two and a half years. But in government IT, historically, it has not been uncommon for projects to crawl along for several years.
“They were blown away by this,” Carlson said. “One of the things the CIOs brought up that they’re struggling with is the culture shift.”
Federal CIO Vivek Kundra has been leading an effort to evaluate under-performing or over-budget IT projects across the agencies, while also trying to bring more transparency into how much money the government is spending on various projects. Just yesterday, Kundra debuted the revamped IT dashboard, an online tool that tracks federal technology projects throughout the agencies and departments.
“They really are paying close attention to these large projects that aren’t working,” Carlson said. “They’ve already canceled one with the VA. There’s going to be more of these.”
“Cost really is king,” said Kris Teutsch, head of Microsoft’s national security group. “That’s driving the behavior of procurement.”
Kenneth Corbin is an associate editor at chúng tôi, the news service of chúng tôi, the network for technology professionals.
Investing in Bitgert

Investing in Bitgert could be one of the wisest decisions a crypto investor could make today, given the high potential to grow Bitgert holdings several-fold in a short time. The projected exponential growth of the Bitgert price this weekend shows the kind of project Bitgert is, in contrast with Litecoin and Bitcoin, which are showing a downturn this weekend. The Bitgert devs just released their weekly updates, and these updates are among the reasons Bitgert investors’ fortunes are growing fast. This is why crypto experts believe Bitgert’s fortunes will skyrocket this weekend while Litecoin and Bitcoin slow down.

The Force Behind Bitgert Growth

Bitgert’s fast-growing ecosystem is one of the biggest reasons its fortunes will increase this week. The Bitgert ecosystem is currently the fastest-growing in terms of products and projects. The adoption of the Bitgert BRC20 blockchain, one of its key products, will be central to that growth: the BRC20 chain is commanding strong demand for Bitgert thanks to its zero gas fee and its claimed 100K TPS throughput. The devs are already working on upgrading the BRC20 nodes, which should further increase chain adoption. Institutional investors (venture capitalists) joining Bitgert will also lift demand for the coin, and that is how the price could skyrocket. Higher adoption rates are expected on Bitgert, which translates to price growth. The upcoming decentralized marketplaces and other major products in the roadmap V2 will also be key, and more projects joining from the Startup Studio will further grow investors’ fortunes.

Litecoin and Bitcoin Developments

It is notable that Litecoin is currently working on implementing the Lightning Network protocol, along with various applications for smart contracts and privacy. Aside from that, Litecoin’s numerous partnerships and integrations as a payment method are behind the current growth in its market cap. Bitcoin is also focused on expanding its network through additional partnerships with Capitalist Ventures; current partners include BitcoinCash, Blockchair, and chúng tôi. Bitcoin’s future market cap is expected to increase rapidly as more partners join. Even with the developments exhibited by Bitcoin and Litecoin so far, Bitgert stands out as an ideal investment this week, with the potential to grow its investors’ fortunes more than Bitcoin and Litecoin.
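To put the 100K TPS figure quoted above in perspective, a back-of-the-envelope calculation shows what that rate would mean in daily transaction capacity. This is a minimal sketch assuming the article’s claimed figure; 100,000 TPS is the project’s stated number, not an independently measured one.

```python
# Back-of-the-envelope check of the throughput claim above.
# CLAIMED_TPS is the article's figure for the Bitgert BRC20 chain,
# not a measured or verified value.

CLAIMED_TPS = 100_000
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def daily_capacity(tps: int) -> int:
    """Theoretical maximum transactions per day at a sustained TPS rate."""
    return tps * SECONDS_PER_DAY

print(daily_capacity(CLAIMED_TPS))  # 8,640,000,000 tx/day at the claimed rate
```

At a sustained 100,000 TPS, the theoretical ceiling is 8.64 billion transactions per day; real chains rarely sustain their headline rate, so the actual figure would depend on block production and network conditions.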