I hope to chat with an experienced and conscientious person who will pay attention to details, be flexible, and be able to work well independently. $20 an hour.
"I hope that guitar music dies. I want it to die a painful death."
The post Polyphia’s Tim Henson Hates Guitar Music, Only Listens to Rap Now appeared first on MetalSucks.
Today is a weird day. I have two books out, a first issue and a last issue. I'll start at the ending...
BLACK HAMMER: AGE OF DOOM #12 hits today and it is the final issue of the series. It's the end of the "farm" storyline, which has been the spine of the Black Hammer universe and the story from which everything else has sprung. Dean Ormston and I started working on Black Hammer in 2014, and here we are five years later, arriving at the end point I had always conceived for the series, and for our main cast of characters: Golden Gail, Barbalien, Abraham Slam and the rest. It's always hard to end a story, especially one where you love the characters and the world as much as I love Black Hammer. But this is where the story led me, and this is where it needs to go.
It's not the end of Black Hammer, of course. Dean and I are already working on the next part of what we consider our "trilogy". The follow-up to Black Hammer and Black Hammer: Age of Doom will focus on the character of Lucy Weber as she struggles to live up to her role as the new Black Hammer. It also features a cast of all-new characters, and some familiar faces.
And starting in December we have SKULLDIGGER & SKELETON BOY, a Black Hammer series I am doing with the incredibly talented Tonci Zonjic (look at that cover!!)
I just received this page of inks from Tonci this morning actually...
So, Age of Doom ends, but Black Hammer will continue! We have lots of cool BH stuff planned for the rest of this year and next.
INFERIOR FIVE #1 also comes out today. This one has been percolating for a long time. I co-write the series with the great Keith Giffen and he draws the main story while I draw 5-page back-up stories in each issue.
This project was proposed to me back at San Diego Comicon in 2017, and I started working on it pretty much right away. So it’s been in the works for over two years.
This is sort of a surreal dream project for me. As I said, Keith is a major influence on me. I grew up reading DC books in the '80s and '90s, and Keith was a major creator for DC back then. I read his stuff avidly: his Legion and his Justice League stuff, but also Ambush Bug, Heckler and everything else.
And, as you may have read, this project is a quasi-sequel to DC's INVASION event from 1988. It is set in DC's 1988 continuity, and Keith actually co-wrote that original series, which I was reading at the time. Very weird and surreal, which pretty much describes the whole project.
To celebrate today's launch, here is a sneak peek at some of my unlettered and uncoloured inks from issue 5!
Just a note: All of my original artwork, including art from Inferior Five, and signed books and prints, are available for sale at CADENCE COMIC ART!
And finally FROGCATCHERS! My next original graphic novel is coming out on September 24 from Simon and Schuster and Gallery 13!
I was a bit overwhelmed, and incredibly grateful, when one of my writing heroes, WARREN ELLIS, read Frogcatchers this past weekend and said some nice things about it in his weekly newsletter...
Hello from out here on the Thames Delta, where I am reading an advance copy of the new Jeff Lemire graphic novel, FROGCATCHERS. It is, as you'd expect, excellent. There's a breathtaking sequence early on where a hand in water morphs into a chest x-ray, which morphs again into the face of a man lying on a bed. Absolutely superb. So few people are doing the kind of experimental literary work in comics that Lemire does, and fewer still do it this well. I don't have the kind of space to try that sort of thing any more, but others do, and I really wish they would. We'd all be better off for it.
Edited to add: Jeff asked me for a quote after I thanked him for letting me read it, and here it is:
"A perfect miniature of memory and loss, affecting and beautifully told in an outstanding use of the medium. A haunting dream of a book."
FROGCATCHERS pre-order (UK) (US) - out next month.
Your local comic book store, or book store, may not have pre-ordered Frogcatchers, so make sure you ask them to order a copy!
That's it for me this week. I am deep in my next graphic novel, a follow up to the above Frogcatchers. I have about 180 pages drawn and I anticipate it will be about 220. I hope to finish this one by early December, when the pre-production on the Essex County TV show kicks into high gear.
Until next time...
Comment on ‘Star Trek: Picard’ Gets New Trailer & January Premiere Date On Amazon Prime UK & CBS All Access by David Poole
This has got to be the most exciting thing to happen for fans of the ST universe in a very long time. I hope people give it a chance, and see it for what it is, "a TV show", not a chance to bash the actors' appearances because they've aged since their last series. Even my non-Trekkie friends are excited to see it, so that has got to be a good sign...
Preseason, these guys named Jake & Adam were both stanning me hard for some reason, I guess because of my afro and because they liked that I play hard, based on a conversation I was having w/ someone else. "I hope I'm on a tribe w/ Jonna!" lol
WASHINGTON – They may have his back on impeachment, but some of President Donald Trump’s most loyal allies are suddenly revolting against his decision to pull back U.S. troops from northern Syria.
On Monday, one chief Trump loyalist in Congress called the move “unnerving to the core.” An influential figure in conservative media condemned it as “a disaster.” And Trump’s former top NATO envoy said it was “a big mistake” that would threaten the lives of Kurdish fighters who had fought alongside American troops for years.
Trump’s surprise move, which came with no advance warning late Sunday and stunned many in his own government, threatened to undermine what has been near lockstep support among Republicans. It also came against the backdrop of a congressional impeachment inquiry in which the backing of Republicans in the Senate is the president’s bulwark against being removed from office.
Sen. Lindsey Graham, R-S.C., who has been among Trump’s most vocal defenders, called the Syria decision “a disaster in the making” that would throw the region into chaos and embolden the Islamic State group.
“I hope I’m making myself clear how short-sighted and irresponsible this decision is,” Graham told Fox News. “I like President Trump. I’ve tried to help him. This, to me, is just unnerving to its core.”
Sen. Marco Rubio, R-Fla., who has shrugged off the key allegation in the impeachment inquiry – that Trump pressured foreign powers to investigate a top Democratic rival – tweeted that Trump’s shift on Syria is “a grave mistake that will have implications far beyond Syria.”
And Sen. Susan Collins, R-Maine, who has been more willing than many Republicans to condemn Trump’s calls for foreign intervention in the 2020 election, called the Syria move “a terribly unwise decision” that would “abandon our Kurdish allies, who have been our major partner in the fight against the Islamic State.”
A more frequent Republican Trump critic, Utah Sen. Mitt Romney, cast Trump’s announcement as “a betrayal.”
“It says that America is an unreliable ally; it facilitates ISIS resurgence; and it presages another humanitarian disaster,” Romney tweeted.
Nikki Haley, who was Trump’s hand-picked ambassador to the United Nations, also cast the decision to withdraw U.S. troops from northern Syria as a betrayal of a key ally.
“The Kurds were instrumental in our successful fight against ISIS in Syria. Leaving them to die is a big mistake,” she wrote on Twitter.
Former Rubio aide Alex Conant highlighted the risks ahead for a president whose political future depends on Republican support.
“For Trump to make a very controversial move on Syria at the exact moment when he needs Senate Republicans more than ever is risky politics,” Conant said, noting the significance for many Senate Republicans of the United States’ policy in northern Syria, where Kurds would be particularly vulnerable to a Turkish invasion.
“They’re not just going to send out a couple of tweets and move on,” Conant said. “At the same time, the White House is going to need these guys to carry a lot of water for them.”
While a number of Republicans criticized Trump’s decision, one of their most important leaders, Senate Majority Leader Mitch McConnell of Kentucky, was sanguine, offering little concern about Syria or impeachment during an appearance at the University of Kentucky.
“There are a few distractions, as you may have noticed,” McConnell said. “But if you sort of keep your head on straight and remember why you were sent there, there are opportunities to do important things for the country and for the states that we represent.”
After the appearance, McConnell issued a statement warning that Trump’s proposed withdrawal “would only benefit Russia, Iran, and the Assad regime. And it would increase the risk that ISIS and other terrorist groups regroup.”
“As we learned the hard way during the Obama Administration, American interests are best served by American leadership, not by retreat or withdrawal,” McConnell said.
Outside government, leaders of conservative groups backed Trump.
Liberty University President Jerry Falwell Jr., a prominent evangelical leader, said Trump was simply “keeping his promise to keep America out of endless wars.”
He suggested Trump could easily reengage in the region if the decision backfires.
“The president has got to do what’s best for the country, whether it helps him with this phony impeachment inquiry or not,” Falwell said in an interview.
Former Trump campaign aide Barry Bennett noted that the president has been talking about reducing troop levels in the Middle East since before the 2016 election.
“I understand that they don’t like the policy, but none of them should be shocked by the policy,” Bennett said. “He’s only been talking about this for four or five years now. I think he’s with the vast majority of the public.”
Still, the backlash from other Trump loyalists was intense.
Rep. Elise Stefanik, R-N.Y., a member of the House Armed Services and Intelligence committees, called it a “misguided and catastrophic blow to our national security interests.”
And on Fox News, a network where many rank-and-file Trump supporters get their news, host Brian Kilmeade said it was “a disaster.”
“Abandon our allies? That’s a campaign promise? Abandon the people that got the caliphate destroyed?” Kilmeade said on “Fox & Friends.”
Bulent Aliriza, director of the Turkey Project at the Center for Strategic and International Studies, said the controversy reminds him of former Defense Secretary James Mattis’ decision to resign late last year after Trump announced plans to withdraw troops from Syria.
“Ultimately, Trump reversed himself,” Aliriza said. “The question is whether he will actually reverse himself again in view of the opposition from Capitol Hill led by several of his closest allies.”
I hope we win but I doubt it, with the shambles we are in at present and no direction from the clueless Gollum. On the upside, if we lose it's the international break and perhaps he'll do the manly thing and fall on his sword. Then we will have an interim manager and a chance to secure Rodgers, Allegri or Tuchel. Please not Pochettino; look how Spurs are faring now.
Dr. Mann runs his mouth again, and this time I think he’s made a huge mistake. Personally, with what knowledge I have of libel law, I think this is actionable under Canadian law as well as US law, and I hope that Steve McIntyre takes Dr. Mann to task legally. Here is a screencap: The…
Some of the best advice I received came when I was experiencing a lot of pain confronting the abuse I endured from my family and later romantic relationships. I opened my heart to a pastor about the hurt in my life and he told me, "You don't have to make yourself feel vulnerable in person and forgive someone face-to-face. Forgiveness can be done at any time from a safe distance." I had never looked at things that way, and it helped me let go of pain and guilt I felt throughout my life. As my parents are still alive, although I have distanced myself for my overall wellbeing, I still had the need to reach out because I recently found out my husband and I are expecting a baby of our own, and I wanted to say how I felt to lay my feelings to rest once and for all. It was cathartic to write out my feelings and realize how strong a person I am, and how my family has missed out on knowing me while thinking I'm good for nothing. I was able to express how I am passionate about raising my child in a loving environment free of violence and apathy, and it felt good to express how I feel without the need for validation and with no expectations. The response I got was apathetic and stoic, as they have historically behaved towards me. They never claimed any responsibility for the actions I described in my letter, nor validated my feelings (which was the source of my pain for years), but it didn't matter to me because I know I did what I felt I needed to do. I needed to confront the hell they put me through and their lack of love and support, and to share that despite it all I forgive them and I still love them, because that's the woman I am. I recognize they brought me into this world, and now I have the opportunity to do better because of the woman I have become. I am a strong and beautiful being despite them; I didn't have their support then and I don't need it now, because I have an inner strength that has sculpted me to be steadfast.
I reached out to them to express how I feel and to face them at the same time. I did this so that when they pass, I know I gave them the chance to be open-hearted; I forgave them and they chose otherwise, so I can release myself from feeling like my feelings are unresolved, and from any guilt. I'm a spiritual person, so I wanted to honour my parents and treat them as I want to be treated, and release myself from feelings of unworthiness and powerlessness, because their limiting beliefs don't have to be mine anymore. Everyone copes with pain and trauma differently, and rejection and abuse by your birth family can be damaging, as society puts so much emphasis on close family bonds being the norm. Something that has brought me a lot of peace is that 'family' doesn't have to be defined by those connected to us by DNA; by letting go of my birth family, I made room for real love and peace in my life. I felt a void in my life from not having parents who stood by me or siblings who cared about me, and I filled that void by being a sister and child to those without that connection (seniors, orphaned and neglected children, etc.). Everyone copes differently, so what works for some may not work for others, but for me, reaching out and volunteering (formally and informally) has brought a lot of love into my life. I never developed a close bond with my parents, but there are hundreds if not thousands of people I opened my heart to and had a genuine connection with, making the world a little brighter. In reading others' feedback, I do agree there is a danger of making bad choices by feeling "needy" for love and validation; that's how I ended up in abusive relationships over the years, making unhealthy choices.
That's why I talk about my experiences and own up to my mistakes, to maybe help others avoid the pain I experienced when feeling unworthy and seeking validation in all the wrong ways (binge drinking to 'fit in' and 'numb out', having numerous flings and seeking validation from men, etc.). I also spoke to counselors and other survivors of abuse to work through the trauma and make healthy choices. It took me a lot of years of pain and suffering to realize I don't need to revictimize myself; I deserve better and need to love myself for me, to be selfish and put myself first, to approve of myself and be proud of myself. I didn't need my family to do that for me; I needed to love myself. Just like forgiving my family, it allowed me to forgive myself for my mistakes and move on. I hope this helps.
I’m excited to be sharing my experience in podcasting with other industry leaders coming up on the 11th of October at The Mill in Bloomington, Indiana at the Flyover Podcast Festival! I hope you can join us! The Flyover Podcast Festival is where the Midwest podcast scene connects and thrives. Experienced and aspiring podcasters alike enjoy a jam-packed day of creative, technical, and business-savvy podcasting innovation at The Mill coworking and incubator space in Bloomington,
The post Flyover Podcast Festival | October 11 | Bloomington, IN appeared first on Martech Zone.
Published on October 7, 2019 5:10 PM UTC
Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; you can send it to me by replying to this email.
Audio version here (may not be up yet).
Towards an empirical investigation of inner alignment (Evan Hubinger) (summarized by Rohin): Last week, we saw that the worrying thing about mesa optimizers (AN #58) was that they could have robust capabilities, but not robust alignment (AN #66). This leads to an inner alignment failure: the agent will take competent, highly-optimized actions in pursuit of a goal that you didn't want.
This post proposes that we empirically investigate what kinds of mesa objective functions are likely to be learned, by trying to construct mesa optimizers. To do this, we need two ingredients: first, an environment in which there are many distinct proxies that lead to good behavior on the training environment, and second, an architecture that will actually learn a model that is itself performing search, so that it has robust capabilities. Then, the experiment is simple: train the model using deep RL, and investigate its behavior off distribution to distinguish between the various possible proxy reward functions it could have learned. (The next summary has an example.)
Some desirable properties:
- The proxies should not be identical on the training distribution.
- There shouldn't be too many reasonable proxies, since then it would be hard to identify which proxy was learned by the neural net.
- Proxies should differ on "interesting" properties, such as how hard the proxy is to compute from the model's observations, so that we can figure out how a particular property influences whether the proxy will be learned by the model.
Rohin's opinion: I'm very excited by this general line of research: in fact, I developed my own proposal along the same lines. As a result, I have a lot of opinions, many of which I wrote up in this comment, but I'll give a summary here.
I agree pretty strongly with the high level details (focusing on robust capabilities without robust alignment, identifying multiple proxies as the key issue, and focusing on environment design and architecture choice as the hard problems). I do differ in the details though. I'm more interested in producing a compelling example of mesa optimization, and so I care about having a sufficiently complex environment, like Minecraft. I also don't expect there to be a "part" of the neural net that is actually computing the mesa objective; I simply expect that the heuristics learned by the neural net will be consistent with optimization of some proxy reward function. As a result, I'm less excited about studying properties like "how hard is the mesa objective to compute".
A simple environment for showing mesa misalignment (Matthew Barnett) (summarized by Rohin): This post proposes a concrete environment in which we can run the experiments suggested in the previous post. The environment is a maze which contains keys and chests. The true objective is to open chests, but opening a chest requires you to already have a key (and uses up the key). During training, there will be far fewer keys than chests, and so we would expect the learned model to develop an "urge" to pick up keys. If we then test it in mazes with lots of keys, it would go around competently picking up keys while potentially ignoring chests, which would count as a failure of inner alignment. This predicted behavior is similar to how humans developed an "urge" for food because food was scarce in the ancestral environment, even though now food is abundant.
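The post describes the environment only informally; a minimal sketch of a keys-and-chests world might look like the following (a hypothetical implementation: the corridor layout, action space, and reward scale are illustrative choices, not taken from the post):

```python
import random

class KeysChestsEnv:
    """Toy keys-and-chests environment. The agent walks a 1-D corridor;
    stepping onto a key picks it up (no reward by itself), stepping onto
    a chest opens it only if a key is held (reward 1, key consumed)."""

    def __init__(self, length=10, n_keys=1, n_chests=5, seed=0):
        rng = random.Random(seed)
        cells = rng.sample(range(1, length), n_keys + n_chests)
        self.keys = set(cells[:n_keys])
        self.chests = set(cells[n_keys:])
        self.pos = 0
        self.length = length
        self.held = 0

    def step(self, action):
        """action is -1 (move left) or +1 (move right)."""
        self.pos = max(0, min(self.length - 1, self.pos + action))
        reward = 0
        if self.pos in self.keys:
            self.keys.discard(self.pos)
            self.held += 1       # keys give no reward themselves
        elif self.pos in self.chests and self.held > 0:
            self.chests.discard(self.pos)
            self.held -= 1
            reward = 1           # the true objective: opened chests
        return self.pos, reward

# Walking right across the whole corridor: with 1 key and 5 chests,
# at most one chest can ever be opened, however many chests are seen.
env = KeysChestsEnv()
total = sum(env.step(+1)[1] for _ in range(9))
```

A key-scarce training regime like this one is exactly where a learned "urge" for keys could come apart from the true chest-opening objective when keys later become abundant.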
Rohin's opinion: While I would prefer a more complex environment to make a more compelling case that this will be a problem in realistic environments, I do think that this would be a great environment to start testing in. In general, I like the pattern of "the true objective is Y, but during training you need to do X to get Y": it seems particularly likely that even current systems would learn to competently pursue X in such a situation.
Technical AI alignment
Machine Learning Projects on IDA (Owain Evans et al) (summarized by Nicholas): This document describes three suggested projects building on Iterated Distillation and Amplification (IDA), a method for training ML systems while preserving alignment. The first project is to apply IDA to solving mathematical problems. The second is to apply IDA to neural program interpretation, the problem of replicating the internal behavior of other programs as well as their outputs. The third is to experiment with adaptive computation where computational power is directed to where it is most useful. For each project, they also include motivation, directions, and related work.
Nicholas's opinion: Figuring out an interesting and useful project to work on is one of the major challenges of any research project, and it may require a distinct skill set from the project's implementation. As a result, I appreciate the authors enabling other researchers to jump straight into solving the problems. Given how detailed the motivation, instructions, and related work are, this document strikes me as an excellent way for someone to begin her first research project on IDA or AI safety more broadly. Additionally, while there are many public explanations of IDA, I found this to be one of the most clear and complete descriptions I have read.
Read more: Alignment Forum summary post
List of resolved confusions about IDA (Wei Dai) (summarized by Rohin): This is a useful post clarifying some of the terms around IDA. I'm not summarizing it because each point is already quite short.
Concrete experiments in inner alignment (Evan Hubinger) (summarized by Matthew): While the highlighted posts above go into detail about one particular experiment that could clarify the inner alignment problem, this post briefly lays out several experiments that could be useful. One example experiment is giving an RL trained agent direct access to its reward as part of its observation. During testing, we could try putting the model in a confusing situation by altering its observed reward so that it doesn't match the real one. The hope is that we could gain insight into when RL trained agents internally represent 'goals' and how they relate to the environment, if they do at all. You'll have to read the post to see all the experiments.
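The reward-in-observation experiment could be set up with a thin environment wrapper; this is a hypothetical sketch (the post proposes the idea but gives no code, and the class names here are invented for illustration):

```python
class ConstantEnv:
    """Trivial stand-in environment: fixed observation, reward of 1.0."""
    def step(self, action):
        return 0, 1.0

class RewardInObsWrapper:
    """Append the reward to the observation the agent sees. At test
    time, a `corrupt` function can decouple the observed reward from
    the real one, to probe which signal the agent learned to pursue."""

    def __init__(self, env, corrupt=None):
        self.env = env
        self.corrupt = corrupt   # e.g. lambda r: -r during testing

    def step(self, action):
        obs, true_reward = self.env.step(action)
        shown = self.corrupt(true_reward) if self.corrupt else true_reward
        # The agent observes (obs, shown); evaluation still uses true_reward.
        return (obs, shown), true_reward

train_env = RewardInObsWrapper(ConstantEnv())
test_env = RewardInObsWrapper(ConstantEnv(), corrupt=lambda r: -r)
```

If the trained agent chases the corrupted observed reward rather than the real one, that is evidence it internally represents "reward shown in the observation" as its goal.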
Matthew's opinion: I'm currently convinced that doing empirical work right now will help us understand mesa optimization, and this was one of the posts that led me to that conclusion. I'm still a bit skeptical that current techniques are sufficient to demonstrate the type of powerful learned search algorithms which could characterize the worst outcomes for failures in inner alignment. Regardless, I think at this point classifying failure modes is quite beneficial, and conducting tests like the ones in this post will make that a lot easier.
Learning human intent
Fine-Tuning GPT-2 from Human Preferences (Daniel M. Ziegler et al) (summarized by Sudhanshu): This blog post and its associated paper describes the results of several text generation/continuation experiments, where human feedback on initial/older samples was used in the form of a reinforcement learning reward signal to finetune the base 774-million parameter GPT-2 language model (AN #46). The key motivation here was to understand whether interactions with humans can help algorithms better learn and adapt to human preferences in natural language generation tasks.
They report mixed results. For the tasks of continuing text with positive sentiment or physically descriptive language, they report improved performance above the baseline (as assessed by external examiners) after fine-tuning on only 5,000 human judgments of samples generated from the base model. The summarization task required 60,000 samples of online human feedback to perform similarly to a simple baseline, lead-3 - which returns the first three sentences as the summary - as assessed by humans.
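The lead-3 baseline is simple enough to state in a few lines; this sketch uses a naive regex sentence splitter as an illustrative choice (the paper's baseline may segment sentences differently):

```python
import re

def lead3_summary(article: str) -> str:
    """Return the first three sentences of an article as its summary."""
    # Split after sentence-ending punctuation followed by whitespace.
    sentences = re.split(r'(?<=[.!?])\s+', article.strip())
    return ' '.join(sentences[:3])

article = (
    "The rover landed safely. Engineers cheered in mission control. "
    "Its first task is a systems check. Science operations begin next week."
)
summary = lead3_summary(article)  # keeps only the first three sentences
```

That such a trivial baseline took 60,000 human judgments to match underscores how hard the summarization task was relative to the sentiment and descriptiveness tasks.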
Some of the lessons learned while performing this research include 1) the need for better, less ambiguous tasks and labelling protocols for sourcing higher quality annotations, and 2) a reminder that "bugs can optimize for bad behaviour", as a sign error propagated through the training process to generate "not gibberish but maximally bad output". The work concludes on the note that it is a step towards scalable AI alignment methods such as debate and amplification.
Sudhanshu's opinion: It is good to see research on mainstream NLProc/ML tasks that includes discussions on challenges, failure modes and relevance to the broader motivating goals of AI research.
The work opens up interesting avenues within OpenAI's alignment agenda, for example learning a diversity of preferences (A OR B), or a hierarchy of preferences (A AND B) sequentially without catastrophic forgetting.
In order to scale, we would want to generate automated labelers through semi-supervised reinforcement learning, to derive the most gains from every piece of human input. The robustness of this needs further empirical and conceptual investigation before we can be confident that such a system can work to form a hierarchy of learners, e.g. in amplification.
Rohin's opinion: One thing I particularly like here is that the evaluation is done by humans. This seems significantly more robust as an evaluation metric than any automated system we could come up with, and I hope that more people use human evaluation in the future.
Preventing bad behavior
Robust Change Captioning (Dong Huk Park et al) (summarized by Dan H): Safe exploration requires that agents avoid disrupting their environment. Previous work, such as Krakovna et al. (AN #10), penalizes an agent's needless side effects on the environment. For such techniques to work in the real world, agents must also estimate environment disruptions, side effects, and changes, while not being distracted by peripheral, irrelevant changes. This paper proposes a dataset to further the study of "Change Captioning," in which a machine learning system describes scene changes in natural language. That is, given before and after images, the system describes the salient change in the scene. Work on systems that can estimate changes will likely advance safe exploration.
Learning Representations by Humans, for Humans (Sophie Hilgard, Nir Rosenfeld et al) (summarized by Asya): Historically, interpretability approaches have involved machines acting as experts, making decisions and generating explanations for their decisions. This paper takes a slightly different approach, instead using machines as advisers who are trying to give the best possible advice to humans, the final decision makers. Models are given input data and trained to generate visual representations based on the data that cause humans to take the best possible actions. In the main experiment in this paper, humans are tasked with deciding whether to approve or deny loans based on details of a loan application. Advising networks generate realistic-looking faces whose expressions represent multivariate information that's important for the loan decision. Humans do better when provided the facial expression 'advice', and furthermore can justify their decisions with analogical reasoning based on the faces, e.g. "x will likely be repaid because x is similar to x', and x' was repaid".
Asya's opinion: This seems to me like a very plausible story for how AI systems get incorporated into human decision-making in the near-term future. I do worry that further down the line, AI systems where AIs are merely advising will get outcompeted by AI systems doing the entire decision-making process. From an interpretability perspective, it also seems to me like having 'advice' that represents complicated multivariate data still hides a lot of reasoning that could be important if we were worried about misaligned AI. I like that the paper emphasizes having humans-in-the-loop during training and presents an effective mechanism for doing gradient descent with human choices.
Rohin's opinion: One interesting thing about this paper is its similarity to Deep RL from Human Preferences: it also trains a human model, that is improved over time by collecting more data from real humans. The difference is that DRLHP produces a model of the human reward function, whereas the model in this paper predicts human actions.
Other progress in AI
The Principle of Unchanged Optimality in Reinforcement Learning Generalization (Alex Irpan and Xingyou Song) (summarized by Flo): In image recognition tasks, there is usually only one label per image, such that there exists an optimal solution that maps every image to the correct label. Good generalization of a model can therefore straightforwardly be defined as a good approximation of the image-to-label mapping for previously unseen data.
In reinforcement learning, our models usually don't map environments to the optimal policy, but states in a given environment to the corresponding optimal action. The optimal action in a state can depend on the environment. This means that there is a tradeoff regarding the performance of a model in different environments.
The authors suggest the principle of unchanged optimality: in a benchmark for generalization in reinforcement learning, there should be at least one policy that is optimal for all environments in the train and test sets. With this in place, generalization does not conflict with good performance in individual environments. If the principle does not initially hold for a given set of environments, we can change that by giving the agent more information. For example, the agent could receive a parameter that indicates which environment it is currently interacting with.
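The paper's suggested fix, telling the agent which environment it is in, can be as simple as concatenating an indicator onto the observation. A minimal sketch (the function name and one-hot encoding are illustrative choices, not from the paper):

```python
def augment_observation(obs, env_id, n_envs):
    """Append a one-hot environment indicator to the observation, so a
    single policy can act optimally even when the optimal action depends
    on which of the n_envs environments the agent is in."""
    one_hot = [0.0] * n_envs
    one_hot[env_id] = 1.0
    return list(obs) + one_hot

# The same base observation becomes distinguishable across environments:
augmented = augment_observation([0.5, 0.2], 1, 3)  # [0.5, 0.2, 0.0, 1.0, 0.0]
```

With the indicator attached, states that looked identical across environments no longer force one action to serve conflicting optima, restoring unchanged optimality by construction.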
Flo's opinion: I am a bit torn here: On one hand, the principle makes it plausible for us to find the globally optimal solution by solving our task on a finite set of training environments. This way the generalization problem feels more well-defined and amenable to theoretical analysis, which seems useful for advancing our understanding of reinforcement learning.
On the other hand, I don't expect the principle to hold for most real-world problems. For example, in interactions with other adapting agents performance will depend on these agents' policies, which can be hard to infer and change dynamically. This means that the principle of unchanged optimality won't hold without precise information about the other agent's policies, while this information can be very difficult to obtain.
More generally, with this and some of the criticism of the AI safety gridworlds that framed them as an ill-defined benchmark, I am a bit worried that too much focus on very "clean" benchmarks might divert from issues associated with the messiness of the real world. I would have liked to see a more conditional conclusion for the paper, instead of a general principle.
And, of course, it's the double-barreled hypocrisy. There's the eco-hypocrisy of the Democratic leader who wags her finger at the rest of us for our too-big carbon footprints, and crusades for massive taxes and regulation to reduce global warming. Then there's the Bay Area hypocrisy of the woman who represents one of the most anti-military areas of the country soaking up military resources to shuttle her (and her many family members) across the country almost every weekend.
Remember: Pelosi's San Francisco is notorious for banning the Marines' Silent Drill Platoon from filming a recruitment commercial on its streets; killing the JROTC program in the public schools; blocking the retired battleship U.S.S. Iowa from docking in its waters; and attacking the Navy's Blue Angels -- which left-wing activists have tried to banish from northern California skies for the past two years.
Apparently, those anti-war protesters have no problem with evil military jets ferrying Pelosi and her massive entourages to the funerals of the late Rep. Stephanie Tubbs Jones and Charlie Norwood; foreign junkets to Rome; and politicized stops to Iowa flood sites to bash the Bush administration. One exasperated Department of Defense official, besieged with itinerary changes and shuttle requests back and forth between San Francisco International Airport and Andrews Air Force Base for Pelosi, her daughter, son-in-law and grandchild, wrote in an e-mail:
"They have a history of canceling many of their past requests. Any chance of politely querying (Pelosi's team) if they really intend to do all of these or are they just picking every weekend? ... (T)here's no need to block every weekend 'just in case.'"
Another official pointed out the "hidden costs" associated with the speaker's last-minute changes and cancellations. "We have ... folks prepping the jets and crews driving in (not a short drive for some), cooking meals and preflighting the jets etc." Upset that a specific type of aircraft was not available to her boss, a Pelosi staffer carped to the DoD coordinators: "This is not good news, and we will have some very disappointed folks, as well as a very upset speaker."
Three months ago, turmoil erupted over Queen Nancy's demand for the military to reposition her plane to fly out of Travis Air Force Base in Fairfield, Calif., closer to where she had "business," instead of San Francisco Airport/SFO (1.5 hours away). A special air missions official wrote: "We have never done this in the past. The deal is ... that the Speaker shuttle is from D.C. to SFO and back. We will not reposition. We do not reposition for convenience even for the SECDEF. It is not (too) far of a drive from Travis to SFO. Did the escort suggest to the speaker that this is OK? If so, I hope you guys correct them immediately. If you agree with me that I am correct, then you need to stay strong and present the facts to the speaker's office."
Another official stated bluntly: "We can't reposition the airplane such a short distance. It is not a judicious use of the asset. It is too expensive to operate the jet when there is truly no need to do so."
A beleaguered colleague responded: "(Y)ou know I understand and feel with you ... but this is a battle we are bound to lose if we tell the speaker office. In the end, this is what will happen. ... I wish that I could say this is a one-time request, but we know it will probably happen again in the future."
In the end, the military won that battle. But a few days later, Pelosi was back with a new demand: that her military plane taking her from D.C. to San Francisco make a stop in New Jersey to bring her and three Democrats to an "innovation forum" at Princeton University involving 21 participants and no audience. A Gulfstream jet was secured for the important "official business."
No word on whether Pelosi required vanilla-scented candles, Evian water and fresh white lilies aboard the flight. But rest assured: Air Diva traveled in style, courtesy of your tax dollars and the forbearance of the U.S. military.
Yep, Jake Owen spilled the beans about singing at Pearce's wedding ... but did you know that Pearce's "I Hope You're Happy Now" duet partner Lee Brice also had a wedding song slip-up of his own?
Sure thing! This was a pretty cool piece of technology; I hope it works great for you, and I welcome any feedback or additions from others here who might be interested. It sounds like you've ordered one? If so, which ebike do you plan to use it with :D
Absolutely! Glad things are smoothing out for you, Robin. Enjoy the bike, I hope it works great and is reliable and you have lots of fun :D
Women Infinity Scarf / Cotton Scarf / Unisex / Bicycle Scarf / Womens Fashion Accessories /Gift For Her ,Mothers Day Gifts ,Mom Gifts by senoAccessoryCache
unisex bicycle scarf / women infinity scarf / red bicycle and old bicycle / brown bicycle scarf eternity scarves / fashion accessories / cotton linen scarf / unisex senoAccessory
Crochet Bikini , Swimwear Crochet beach shorts crochet bikini top 2018 Summer Fashion Swimsuits by senoAccessoryCache
Beach short & crochet bikini top
Crochet Bikini , Burnt Orange Two Piece Swimsuits Womens Swimwear Crochet Brazilian Bikini Bathingsuit Beachwear / For Her // senoaccessory by senoAccessoryCache
Crochet Swimwear Crochet Bikini Set Swimsuits 2019 Beach Fashion by senoaccessory
Crochet Top ,White Halter top ,Crochet Festival Top , Womens Swimwear Swimsuits Crochet Bikini Top Summer Beach / senoaccessory by senoAccessoryCache
white halter top crochet halter top halter tank top hippie women halter bikini crop top summer wear senoAccessory
Convertible Mittens Fingerless Gloves Hand-Knitted Gloves Winter Fashion Women Accessories Women Gloves Winter Gloves /For Her / Gift Women by senoAccessoryCache
READY TO SHIP !!!!! convertible fingerless mittens / colorful fingerless gloves / crochet knit gloves ,batik design senoAccessory
valentine’s day gift for him Heart Gloves Fingerless Gloves Knit Gloves Winter Accessories /Mom Gifts / Gifts For Women / Bestfriend gifts by senoAccessoryCache
Red Knit Fingerless Gloves Heart Gloves Mittens Arm Warmer Hand Warmer Womens Fashion Accessories Winter Gift For Her Valentines Day Gifts
Gift For Her Knit Infinity Scarf Scarves For Women Winter Accessories Chunky Knit Scarf Winter Scarf Gift For Her For Best Friend by senoAccessoryCache
Chunky white infinity scarf ,knitted scarves , unisex ,men scarf , women ,eternity scarf cowl neck ,neckwarmer ,Mohair scarf , senoAccessory winter fashion,winter accessories, knitted scarves, holiday gifts, clothing, infinity scarf, knitted scarf, eternity scarf,women scarves, men scarves, winter scarf, cowl,winter scarf trends