SHAFAQNA (Shia International News Association) - The Liberties and Human Rights Department (LHRD) in al-Wefaq National Islamic Society issued a report highlighting the widespread violations committed by Bahraini military and security authorities in conjunction with the Formula 1 race from 13th to 22nd April, 2013. The report clarified that the violations began one week before the Formula 1 race started, in conjunction with preparations for the race, and continued for 10 days.
The report mentioned that 132 arrests (105 males, 2 females and 24 children) have taken place, while 69 houses were raided, 27 citizens were injured and 33 cases of torture by the regime forces were reported.
The report also stated that more than 27 areas were subjected to collective punishment through the use of toxic gases and the forces' provocative presence in residential areas, in addition to security pursuits in alleyways.
First: Arrest cases
132 citizens have been arrested (105 males, 2 females and 24 children), while 40 detainees were released during the same period. The LHRD noted that the number of arrests increased during the Formula 1 race period, between the 19th and 21st of the current month. It said 43 were arrested on Friday the 19th and 45 were arrested on Sunday the 21st, from different areas.
The report confirmed that all arrests took place without legal arrest warrants or search orders during private house raids. A large number of the detainees were brought before the Public Prosecution, which then issued orders to keep them in detention for more than one month, while most were not given a chance to call lawyers. The detainees told their family members that they were subjected to mistreatment and harassment; some said they were subjected to torture.
Second: Torture and injury cases
Citizens were subjected to attacks by the regime forces, which left 27 injured through the use of firearms (birdshot guns) and teargas canisters fired directly at protesters. The reported injuries varied from moderate to serious; some were to the face and head.
A number of citizens were subjected to torture; 33 cases were documented, as the forces tend to brutally assault protesters during arrest. The forces beat protesters with batons and guns in retaliation, according to detainees.
Third: House raids and collective punishment
The regime forces raided 69 private houses and buildings in different areas over ten days (between 13th and 22nd April 2013), claiming to be searching for "suspects". In most of the cases, the LHRD documented vandalism and robbery of private property and belongings as well as the breaking of doors. In some cases, inhabitants were beaten, pepper-sprayed and insulted. One case of live ammunition being fired at three houses by unknown persons was documented in the village of Aali. The firing resulted in material damage to the houses.
More than 27 cases of collective punishment were documented; residential areas were showered with teargas and a state of insecurity was imposed on those areas.
Fourth: Attachments; video footage
1. The regime forces point birdshot guns from house rooftops
2. Intensive firing on houses at night
3. Rescuing a family whose house was targeted with toxic teargas
4. Brutal arrest of youth
5. Children beaten for no reason
6. Regime forces arrest a child
source : Alwefaq
SHAFAQNA (Shia International News Association) – Boston Marathon bombing suspect Dzhokhar Tsarnaev has been arrested following an extensive manhunt that ended in the Boston suburb of Watertown. Law enforcement units from around the country were involved in the search.
The crowd around the standoff scene in Watertown burst into cheers as it became clear that Tsarnaev had been taken into custody following reports that a negotiator was on site.
He will be transported to Mount Auburn Hospital, the same facility where a police officer shot in a standoff with the Tsarnaevs is recovering, the Boston Globe reports. Tsarnaev is listed in “serious, if not critical condition” after suffering gunshot wounds to the neck and leg, according to CBS News.-www.shafaqna.com/English
SHAFAQNA (Shia International News Association) – US authorities have arrested a suspect from Mississippi in connection with a letter that tested positive for the poison ricin that was sent to President Barack Obama, a law enforcement source has said.
Secret Service spokesman Ed Donovan said on Wednesday that the letter was intercepted at a facility away from the White House, adding that the letter was received on Tuesday.
"This facility routinely identifies letters or parcels that require secondary screening or scientific testing before delivery," Donovan said.
"The Secret Service is working closely with the US Capitol Police and the FBI in this investigation."
The FBI said late on Wednesday it had arrested Paul Kevin Curtis, of Corinth, Mississippi, in connection with the letters.
The Federal Bureau of Investigation said preliminary tests on a letter sent to President Barack Obama indicated the presence of ricin.
But the FBI statement added: "There is no indication of a connection to the attack in Boston," where three people were killed in bombings at the Boston Marathon on Monday.
The letter is undergoing further testing because preliminary field tests can be unreliable, creating false positives.
Al Jazeera's Patty Culhane, reporting from Washington DC, said: "It will take up to 48 hours for them to find out if it is ricin."
It came after legislators said a different letter was mailed to Mississippi Senator Roger Wicker that tested positive for ricin.
A law enforcement official, speaking on condition of anonymity, said the letter to Obama was very similar to the one mailed to Wicker.
Michigan Senator Carl Levin has also said his regional office in his state received a suspicious letter and that authorities have been alerted.
Levin said in a statement that an aide received the letter on Wednesday, but did not open it. Authorities are now investigating.
The Democratic legislator said he and his staff do not know if the mail presented a threat.
The episode also recalled the mysterious series of letters laced with anthrax that were sent to lawmakers and some journalists following the September 11 attacks in 2001 which killed five people and sickened 17 others.
Tensions have been high in Washington and across the country since the deadly bombings on Monday at the Boston Marathon that killed three people and injured more than 170. -www.shafaqna.com/English
Source: Al Jazeera
SHAFAQNA (Shia International News Association) -- The police have a very bright future ahead of them – and not just because they can now look up potential suspects on Google. As they embrace the latest technologies, their work is bound to become easier and more effective, raising thorny questions about privacy, civil liberties, and due process.
For one, policing is in a good position to profit from "big data". As the costs of recording devices keep falling, it's now possible to spot and react to crimes in real time. Consider a city like Oakland in California. Like many other American cities, today it is covered with hundreds of hidden microphones and sensors, part of a system known as ShotSpotter, which not only alerts the police to the sound of gunshots but also triangulates their location. On verifying that the noises are actual gunshots, a human operator then informs the police.
It's not hard to imagine ways to improve a system like ShotSpotter. Gunshot-detection systems are, by design, reactive; they might help to thwart or quickly respond to crime, but they won't root it out. The decreasing costs of computing, considerable advances in sensor technology, and the ability to tap into vast online databases allow us to move from identifying crime as it happens – which is what ShotSpotter does now – to predicting it before it happens.
Instead of detecting gunshots, new and smarter systems can focus on detecting the sounds that have preceded gunshots in the past. This is where the techniques and ideologies of big data make another appearance, promising that a greater, deeper analysis of data about past crimes, combined with sophisticated algorithms, can predict – and prevent – future ones. This is a practice known as "predictive policing", and even though it's just a few years old, many tout it as a revolution in how police work is done. It's the epitome of solutionism; there is hardly a better example of how technology and big data can be put to work to solve the problem of crime by simply eliminating crime altogether. It all seems too easy and logical; who wouldn't want to prevent crime before it happens?
Police in America are particularly excited about what predictive policing – one of Time magazine's best inventions of 2011 – has to offer; Europeans are slowly catching up as well, with Britain in the lead. Take the Los Angeles Police Department (LAPD), which is using software called PredPol. The software analyses years of previously published statistics about property crimes such as burglary and automobile theft, breaks the patrol map into 500 sq ft zones, calculates the historical distribution and frequency of actual crimes across them, and then tells officers which zones to police more vigorously.
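The grid-and-frequency idea described above can be sketched in a few lines of code. This is only an illustrative toy, not PredPol's actual method: the zone size, input format and ranking rule here are all assumptions made for the example.

```python
from collections import Counter

ZONE_SIZE = 500  # feet per side of each square patrol zone (assumed)

def zone_for(x_ft, y_ft):
    """Map a crime location (feet from the map origin) to a grid zone."""
    return (int(x_ft // ZONE_SIZE), int(y_ft // ZONE_SIZE))

def hot_zones(past_crimes, top_n=3):
    """Rank zones by historical crime frequency, highest counts first."""
    counts = Counter(zone_for(x, y) for x, y in past_crimes)
    return [zone for zone, _ in counts.most_common(top_n)]

# Example: burglary coordinates from an imaginary incident log.
crimes = [(120, 80), (140, 60), (980, 1020), (130, 90), (990, 1040), (2500, 40)]
print(hot_zones(crimes, top_n=2))
```

A real system would weight recent crimes more heavily and model how risk spreads to neighbouring zones, but even this crude tally shows how "which zones to police more vigorously" can fall out of nothing more than historical counts.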
It's much better – and potentially cheaper – to prevent a crime before it happens than to come late and investigate it. So while patrolling officers might not catch a criminal in action, their presence in the right place at the right time still helps to deter criminal activity. Occasionally, though, the police might indeed disrupt an ongoing crime. In June 2012 the Associated Press reported on an LAPD captain who wasn't so sure that sending officers into a grid zone on the edge of his coverage area – following PredPol's recommendation – was such a good idea. His officers, as the captain expected, found nothing; however, when they returned several nights later, they caught someone breaking a window. Score one for PredPol?
Trials of PredPol and similar software began too recently to speak of any conclusive results. Still, the intermediate results look quite impressive. In Los Angeles, five LAPD divisions that use it in patrolling territory populated by roughly 1.3m people have seen crime decline by 13%. The city of Santa Cruz, which now also uses PredPol, has seen its burglaries decline by nearly 30%. Similar uplifting statistics can be found in many other police departments across America.
Other powerful systems that are currently being built can also be easily reconfigured to suit more predictive demands. Consider the New York Police Department's latest innovation – the so-called Domain Awareness System – which syncs the city's 3,000 closed-circuit camera feeds with arrest records, 911 calls, licence plate recognition technology, and radiation detectors. It can monitor a situation in real time and draw on a lot of data to understand what's happening. The leap from here to predicting what might happen is not so great.
If PredPol's "prediction" sounds familiar, that's because its methods were inspired by those of prominent internet companies. Writing in The Police Chief magazine in 2009, a senior LAPD officer lauded Amazon's ability to "understand the unique groups in their customer base and to characterise their purchasing patterns", which allows the company "not only to anticipate but also to promote or otherwise shape future behaviour". Thus, just as Amazon's algorithms make it possible to predict what books you are likely to buy next, similar algorithms might tell the police how often – and where – certain crimes might happen again. Ever stolen a bicycle? Then you might also be interested in robbing a grocery store.
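The Amazon-style reasoning the LAPD officer praises can be illustrated with a simple co-occurrence count: items (or offences) that often appear together in past records are predicted to follow one another. The data and scoring rule below are invented for illustration and are not drawn from any real recommender.

```python
from collections import defaultdict

def co_occurrence(histories):
    """Count how often each pair of items appears in the same history."""
    pairs = defaultdict(int)
    for items in histories:
        unique = sorted(set(items))
        for i, a in enumerate(unique):
            for b in unique[i + 1:]:
                pairs[(a, b)] += 1
    return pairs

def likely_next(item, histories):
    """Items most often seen alongside `item`, most frequent first."""
    pairs = co_occurrence(histories)
    related = [(b if a == item else a, n)
               for (a, b), n in pairs.items() if item in (a, b)]
    return [x for x, _ in sorted(related, key=lambda t: -t[1])]
```

Swap purchase histories for criminal records and the same arithmetic produces the bicycle-thief-to-grocery-store leap the article jokes about; the objections raised below about opacity and bias apply to exactly this kind of counting.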
Here we run into the perennial problem of algorithms: their presumed objectivity and quite real lack of transparency. We can't examine Amazon's algorithms; they are completely opaque and have not been subject to outside scrutiny. Amazon claims, perhaps correctly, that secrecy allows it to stay competitive. But can the same logic be applied to policing? If no one can examine the algorithms – which is likely to be the case as predictive-policing software will be built by private companies – we won't know what biases and discriminatory practices are built into them. And algorithms increasingly dominate many other parts of our legal system; for example, they are also used to predict how likely a certain criminal, once on parole or probation, is to kill or be killed. Developed by a University of Pennsylvania professor, this algorithm has been tested in Baltimore, Philadelphia and Washington DC. Such probabilistic information can then influence sentencing recommendations and bail amounts, so it's hardly trivial.
But how do we know that the algorithms used for prediction do not reflect the biases of their authors? For example, crime tends to happen in poor and racially diverse areas. Might algorithms – with their presumed objectivity – sanction even greater racial profiling? In most democratic regimes today, police need probable cause – some evidence and not just guesswork – to stop people in the street and search them. But armed with such software, can the police simply say that the algorithms told them to do it? And if so, how will the algorithms testify in court? Techno-utopians will probably overlook such questions and focus on the abstract benefits that algorithmic policing has to offer; techno-sceptics, who start with some basic knowledge of the problems, constraints and biases that already pervade modern policing, will likely be more critical.
Legal scholar Andrew Guthrie Ferguson has studied predictive policing in detail. Ferguson cautions against putting too much faith in the algorithms and succumbing to information reductionism. "Predictive algorithms are not magic boxes that divine future crime, but instead probability models of future events based on current environmental vulnerabilities," he notes.
But why do they work? Ferguson points out that there will be future crime not because there was past crime but because "the environmental vulnerability that encouraged the first crime is still unaddressed". When the police, having read their gloomy forecast about yet another planned car theft, see an individual carrying a screwdriver in one of the predicted zones, this might provide reasonable suspicion for a stop. But, as Ferguson notes, if the police arrested the gang responsible for prior crimes the day before, but the model does not yet reflect this information, then prediction should be irrelevant, and the police will need some other reasonable ground for stopping the individual. If they do make the stop, then they shouldn't be able to say in court, "The model told us to." This, however, may not be obvious to the person they have stopped, who has no familiarity with the software and its algorithms.
Then there's the problem of under-reported crimes. While most homicides are reported, many rapes and home break-ins are not. Even in the absence of such reports, local police still develop ways of knowing when something odd is happening in their neighbourhoods. Predictive policing, on the other hand, might replace such intuitive knowledge with a naive belief in the comprehensive power of statistics. If only data about reported crimes are used to predict future crimes and guide police work, some types of crime might be left unstudied – and thus unpursued.
What to do about the algorithms then? It is a rare thing to say these days but there is much to learn from the financial sector in this regard. For example, after a couple of disasters caused by algorithmic trading in August 2012, financial authorities in Hong Kong and Australia drafted proposals to establish regular independent audits of the design, development and modification of the computer systems used for algorithmic trading. Thus, just as financial auditors could attest to a company's balance sheet, algorithmic auditors could verify if its algorithms are in order.
As algorithms are further incorporated into our daily lives – from Google's Autocomplete to PredPol – it seems prudent to subject them to regular investigations by qualified and ideally public-spirited third parties. One advantage of the auditing solution is that it won't require the audited companies publicly to disclose their trade secrets, which has been the principal objection – voiced, of course, by software companies – to increasing the transparency of their algorithms.
The police are also finding powerful allies in Silicon Valley. Companies such as Facebook have begun using algorithms and historical data to predict which of their users might commit crimes using their services. Here is how it works: Facebook's own predictive systems can flag certain users as suspicious by studying certain behavioural cues: the user only writes messages to others under 18; most of the user's contacts are female; the user is typing keywords like "sex" or "date." Staffers can then examine each case and report users to the police as necessary. Facebook's concern with its own brand here is straightforward: no one should think that the platform is harbouring criminals.
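The behavioural cues listed above amount to a rule-based filter, which can be sketched as follows. The cue list, threshold and combination rule are invented for this example; Facebook's real system is not public and certainly combines far more signals.

```python
def flag_user(messages, contact_ages, keywords=("sex", "date")):
    """Return True if crude behavioural cues suggest human review.

    Hypothetical rules: the account only contacts minors AND at least
    two messages contain a watched keyword.
    """
    only_minors = bool(contact_ages) and all(age < 18 for age in contact_ages)
    keyword_hits = sum(
        any(k in msg.lower() for k in keywords) for msg in messages
    )
    # Flag only when both cues co-occur, mirroring the idea that a
    # staffer then examines the case before anything reaches the police.
    return only_minors and keyword_hits >= 2
```

Note how brittle this is: the same keywords in an innocent conversation trip the filter, which is why the low-false-positive claim quoted below matters so much.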
In 2011 Facebook began using PhotoDNA, a Microsoft service that allows it to scan every uploaded picture and compare it with child-porn images from the FBI's National Crime Information Centre. Since then it has expanded its analysis beyond pictures as well. In mid-2012 Reuters reported on how Facebook, armed with its predictive algorithms, apprehended a middle-aged man chatting about sex with a 13-year-old girl, arranging to meet her the day after. The police contacted the teen, took over her computer, and caught the man.
Facebook is at the cutting edge of algorithmic surveillance here: just like police departments that draw on earlier crime statistics, Facebook draws on archives of real chats that preceded real sex assaults. Curiously, Facebook justifies its use of algorithms by claiming that they tend to be less intrusive than humans. "We've never wanted to set up an environment where we have employees looking at private communications, so it's really important that we use technology that has a very low false-positive rate," Facebook's chief of security told Reuters.
It's difficult to question the application of such methods to catching sexual predators who prey on children (not to mention that Facebook may have little choice here, as current US child-protection laws require online platforms used by teens to be vigilant about predators). But should Facebook be allowed to predict any other crimes? After all, it can easily engage in many other kinds of similar police work: detecting potential drug dealers, identifying potential copyright violators (Facebook already prevents its users from sharing links to many file-sharing sites), and, especially in the wake of the 2011 riots in the UK, predicting the next generation of troublemakers. And as such data becomes available, the temptation to use it becomes almost irresistible.
That temptation was on full display following the rampage in a Colorado movie theatre in June 2012, when an isolated gunman went on a killing spree, murdering 12 people. A headline that appeared in the Wall Street Journal soon after the shooting says it all: "Can Data Mining Stop the Killing?" It won't take long for this question to be answered in the affirmative.
In many respects, internet companies are in a much better position to predict crime than police. Where the latter need a warrant to assess someone's private data, the likes of Facebook can look up their users' data whenever they want. From the perspective of police, it might actually be advantageous to have Facebook do all this dirty work, because Facebook's own investigations don't have to go through the court system.
While Facebook probably feels too financially secure to turn this into a business – it would rather play up its role as a good citizen – smaller companies might not resist the temptation to make a quick buck. In 2011 TomTom, a Dutch satellite-navigation company that has now licensed some of its almighty technology to Apple, found itself in the middle of a privacy scandal when it emerged that it had been selling GPS driving data collected from customers to the police. Privacy advocate Chris Soghoian has likewise documented the easy-to-use "pay-and-wiretap" interfaces that various internet and mobile companies have established for law enforcement agencies.
Publicly available information is up for grabs too. Thus, police are already studying social-networking sites for signs of unrest, often with the help of private companies. The title of a recent brochure from Accenture urges law enforcement agencies to "tap the power of social media to drive better policing outcomes". Plenty of companies are eager to help. ECM Universe, a start-up from Virginia, US, touts its system, called Rapid Content Analysis for Law Enforcement, which is described as "a social media surveillance solution providing real-time monitoring of Twitter, Facebook, Google groups, and many other communities where users express themselves freely".
"The solution," notes the ECM brochure, "employs text analytics to correlate threatening language to surveillance subjects, and alert investigators of warning signs." What kind of warning signs? A recent article in the Washington Post notes that ECM Universe helped authorities in Fort Lupton, Colorado, identify a man who was tweeting such menacing things as "kill people" and "burn [expletive] school". This seems straightforward enough but what if it was just "harm people" or "police suck"?
As companies like ECM Universe accumulate extensive archives of tweets and Facebook updates sent by actual criminals, they will also be able to predict the kinds of non-threatening verbal cues that tend to precede criminal acts. Thus, even tweeting that you don't like your yoghurt might bring police to your door, especially if someone who tweeted the same thing three years before ended up shooting someone in the face later in the day.
However, unlike Facebook, neither police nor outside companies see the whole picture of what users do on social media platforms: private communications and "silent" actions – clicking links and opening pages – are invisible to them. But Facebook, Twitter, Google and similar companies surely know all of this – so their predictive power is much greater than the police's. They can even rank users based on how likely they are to commit certain acts.
An apt illustration of how such a system can be abused comes from The Silicon Jungle, ostensibly a work of fiction written by a Google data-mining engineer and published by Princeton University Press – not usually a fiction publisher – in 2010. The novel is set in the data-mining operation of Ubatoo – a search engine that bears a striking resemblance to Google – where a summer intern develops Terrorist-o-Meter, a sort of universal score of terrorism aptitude that the company could assign to all its users. Those unhappy with their scores would, of course, get a chance to correct them – by submitting even more details about themselves. This might seem like a crazy idea but – in perhaps another allusion to Google – Ubatoo's corporate culture is so obsessed with innovation that its interns are allowed to roam free, so the project goes ahead.
To build Terrorist-o-Meter, the intern takes a list of "interesting" books that indicate a potential interest in subversive activities and looks up the names of the customers who have bought them from one of Ubatoo's online shops. Then he finds the websites that those customers frequent and uses the URLs to find even more people – and so on until he hits the magic number of 5,000. The intern soon finds himself pursued by both an al-Qaida-like terrorist group that wants those 5,000 names to boost its recruitment campaign, as well as various defence and intelligence agencies that can't wait to preemptively ship those 5,000 people to Guantánamo.
We don't know if Facebook has some kind of Paedophile-o-Meter. But, given the extensive user analysis it already does, it probably wouldn't be very hard to build one – and not just for scoring paedophiles. What about a Drug-o-Meter? Or – Joseph McCarthy would love this – a Communist-o-Meter? Given enough data and the right algorithms, all of us are bound to look suspicious. What happens, then, when Facebook turns us – before we have committed any crimes – over to the police? Will we, like characters in a Kafka novel, struggle to understand what our crime really is and spend the rest of our lives clearing our names? Will Facebook perhaps also offer us a way to pay a fee to have our reputations restored? What if its algorithms are wrong?
The promise of predictive policing might be real, but so are its dangers. The solutionist impulse needs to be restrained. Police need to subject their algorithms to external scrutiny and address their biases. Social networking sites need to establish clear standards for how much predictive self-policing they'll actually do and how far they will go in profiling their users and sharing this data with police. While Facebook might be more effective than police in predicting crime, it cannot be allowed to take on these policing functions without also adhering to the same rules and regulations that spell out what police can and cannot do in a democracy. We cannot circumvent legal procedures and subvert democratic norms in the name of efficiency alone.
SHAFAQNA (Shia International News Association) -- Police have arrested around 150 people accused of burning dozens of Christian houses in eastern Pakistan after a non-Muslim was accused of making offensive comments about Islam's Prophet Muhammad, police said Sunday as Christians rallied against the destruction.
The Christian demonstrators blocked a main highway in Lahore and police fired tear gas shells to disperse the protesters who demanded assistance from the government.
Government spokesman Pervaiz Rasheed promised the government would help them rebuild their houses, but the Christians expressed dissatisfaction with the way the government was handling the incident.
"I have been robbed of all of my life's savings," Yousuf Masih said, standing close to his burned house. He said the government's announcement that it would give 200,000 rupees ($2,000) compensation to each family was a joke.
The incident began on Friday after a Muslim accused a Christian man of blasphemy — an offense that in Pakistan is punished by life in prison or death. On Saturday, a mob of angry Muslims rampaged through the Christian neighborhood, burning about 170 houses.
The Christian man is in police custody pending an investigation into the allegations.
Those who rioted are being investigated for alleged arson, robbery, theft, and terrorism, said police officer Abdur Rehman. The Pakistani police usually arrest rioters to tamp down public anger, but those accused are rarely convicted.
The law is often misused to settle personal scores and rivalries.
Akram Gill, a local bishop in the Lahore Christian community, said the incident had more to do with personal enmity between two men — one Christian and one Muslim — than blasphemy. He said the men got into a brawl after drinking late one night, and in the morning the Muslim man made up the blasphemy story as payback.
Such accusations of blasphemy in Pakistan can prompt huge crowds to take the law into their own hands. Once an accusation is made it's difficult to get it reversed, partly because law enforcement officials and politicians do not want to be seen as being soft on blasphemers.
According to Human Rights Watch, there are at least 16 people on death row for blasphemy and another 20 are serving life sentences.
Last year, there was a rare reversal of a blasphemy case. A teenage Christian girl with suspected mental disabilities was accused of burning pages of the Quran. But she was later released after a huge domestic and international outcry about her treatment. A local cleric where she lived was arrested and accused of planting the pages in her bag to incriminate her, a rare example of the accuser facing legal consequences. However, he was later freed on bail.
Also on Sunday, a suspected U.S. missile strike killed a foreign militant who was riding on horseback in Datta Khel in North Waziristan, according to three Pakistani intelligence officials who spoke anonymously because they were not authorized to talk to the media.
SHAFAQNA (Shia International News Association) – Sudanese security officials say they have arrested 13 people, including the former head of the country's powerful intelligence service, over an alleged plot to sabotage national security within the African state.
Salah Gosh, former head of the intelligence and security agency, was arrested with others on suspicion of "inciting chaos", "targeting" some leaders and spreading rumours about President Omar Hassan al-Bashir's health, the Sudanese information minister said on Thursday.
"Thirteen people were arrested," Ahmed Belal Osman, the minister, said.
"The situation is now totally stable."
Al Jazeera's Harriet Martin, reporting from Khartoum, said they were accused of "planning to incite chaos in Sudan at a very sensitive time".
Witnesses described seeing armoured vehicles and troops in the tightly controlled centre of the capital, Khartoum, in the early hours of Thursday, although news agencies said there was no increase in security later on.
Sudan's security and intelligence agency "foiled a sabotage plot this morning aimed at bringing about security disturbances in the country led by figures from the opposition forces", the Sudanese Media Centre reported on its Arabic-language website.
Quoting a security source, the media centre said authorities arrested "military and civilian figures" in connection with the plot to destabilise the country.
Bashir has maintained a near 25-year hold on power, even as a series of uprisings troubled the country's poor border areas, including the conflict-torn region of Darfur.
But Sudan has been stuck in economic crisis since the south - the source of most of its known oil reserves - declared independence last year under the terms of a peace deal.
High prices for food and other basics have added to widespread public anger over losing the south and have emboldened opposition activists to call for protests. Analysts say the crisis has also exacerbated divisions in the government.
Unrest over price rises and food and fuel shortages has preceded coups to overthrow the government in Sudan in the past.
Sudan has been plagued by political conflicts and crises for most of its history since independence from Britain in 1956.
Decades of civil war between the north and south culminated with South Sudan's independence in July last year under a 2005 peace deal.
Tensions in both nations and between the two states have been high since then. The two countries accused one another of incursions in disputed border zones on Wednesday, a setback to recent security and border deals.
Small demonstrations against cuts in fuel subsidies and other austerity measures broke out across Sudan in June but decreased after a security crackdown and the start of the Muslim fasting month of Ramadan. – www.shafaqna.com/English
SHAFAQNA (Shia International News Association) — A Syrian passenger plane which was forced to land sits at Esenboga airport in Ankara. The plane was allowed to leave after a weapons inspection. — www.shafaqna.com/English
SHAFAQNA (Shia International News Association) — A Beni Suef imam has filed a lawsuit against two Christian boys for allegedly tearing up Quranic verses; the father of the children pleads that the kids are illiterate and were playing with papers near a garbage dump.
Two Coptic Christian children, Nabil Nagy Rizk, 10, and Mina Nady Farag, 9, were arrested on Tuesday for insulting religion in the Upper Egyptian governorate of Beni Suef, after the imam of their local mosque filed a complaint against them.
By order of the prosecution, the two boys are now being held in Beni Suef juvenile detention pending further investigation on Sunday.
Ibrahim Mohamed Ali, the village imam, accused the children of tearing up pages of the Quran.
According to an Ahram Online reporter in the area, Ali initially took the children to the church and requested that the priest punish them.
Unsatisfied with the church's decision not to castigate the two boys, Ali, together with three other villagers, turned to the courts.
Nabil's father, Nagy Rizk, defended the boys in a public statement, explaining that they are illiterate and therefore did not know the content of the papers, which they found in a small white bag while playing near a pile of rubbish in the street.
The events in Beni Suef follow a wave of arrests across Egypt of individuals accused by others of "committing blasphemy."
Most of those arrested were Copts accused of "insulting Islam."
Earlier this month in Sohag, a Coptic school teacher, Bishoy Kamel, was sentenced to six years in prison for posting cartoons deemed defamatory to Islam and the Prophet Mohammed on the social-networking site Facebook, as well as for insulting President Mohamed Morsi and his family.
This followed the arrest of a Coptic man, 25-year-old Albert Saber, on 13 September, who was charged with insulting religion for allegedly posting the controversial anti-Islam short film, Innocence of Muslims, also on his Facebook page.
Saber, who was referred to Marg Misdemeanor Court, is still in detention awaiting trial.
SHAFAQNA (Shia International News Association) — A member of a leading human rights organization has been beaten and arrested in a southern Algerian city.
The lawyer for Yacine Zaid said Tuesday that his client had been punched and beaten by police when they arrested him at a roadblock in Ouargla, 700 kilometers (435 miles) south of the capital. The lawyer, Sidhoum Mohamed Amine, said Zaid was arrested Monday on the grounds that he had shown a lack of respect for police.
The daily El Watan reported that Zaid would go on trial Monday for "humiliating" and "striking a police officer." The paper quoted an eyewitness, Abdelmalek Aibek Eg Sahli, a representative of a hotel union, as saying that Zaid "wasn't aggressive and I don't see why he is accused of that."
Amine said in an interview with The Associated Press that police often use "verbal aggression" to justify an arrest.
Zaid, a well-known blogger, is a member of the Algerian League for the Defense of Human Rights. He was detained in Algiers on Sept. 25 after a support rally for another human rights activist on a hunger strike, but was later released.
SHAFAQNA (Shia International News Association) — Mona Eltahawy, who was arrested for defacing the poster on the New York subway. Photograph: Dan Callister
Mona Eltahawy, the prominent Egyptian-American writer and activist, has been arrested in New York after spraying paint over a controversial poster on the subway that has been condemned for equating Muslims with "savages".
The posters were put up in the city by the anti-Muslim American Freedom Defense Initiative, led by Pam Geller. They were approved by a US court, which ruled that they were "political" statements and protected by the first amendment, which guarantees free speech.
The poster states: "In any war between the civilized man and the savage, support the civilized man." Between two Stars of David, it adds: "Support Israel. Defeat Jihad."
Mona Eltahawy spray-paints the poster on the subway
Eltahawy was arrested after a supporter of Geller's initiative attempted to stop her as she defaced the sign with purple spray paint.
The posters are now displayed in 10 New York stations – including Grand Central and Times Square – after a court ruled that the local transport authority could not refuse the ads.
In a video of the incident posted online by the New York Post, Mona Eltahawy can be seen attempting to paint over the poster before she is tackled by a woman with a camera, who is identified as Pamela Hall.
"Mona, do you think you have the right to do this?" Eltahawy is asked. "I do actually," Eltahawy replies, adding: "I think this is freedom of expression, just as [the ad] is freedom of expression."
As the scuffle continues, two police officers then appear to arrest Eltahawy, who says: "This is what happens in America when you non-violently protest."
Eltahawy, who has written for this paper, was later charged with "criminal mischief" and "graffiti".
During the Arab spring, Eltahawy was arrested in Cairo and suffered an assault by riot police which left her with two broken arms.
The Metropolitan Transport Authority (MTA) had originally ruled it would not permit the posters because they were demeaning, but was compelled to take the $6,000 (£3,700) ad after Geller's group went to court.
Last month, US district court judge Paul Engelmayer ruled that the ad is protected speech under the first amendment.
"Our hands are tied," New York subway spokesman Aaron Donovan said. "Under our existing ad standards as modified by the injunction, the MTA is required to run the ad."
The posters have attracted widespread condemnation including from Jewish figures. Among those who have spoken out against them is Rabbi Rachel Kahn-Troster, of Rabbis for Human Rights — North America, who wrote for CNN online: "As a rabbi, I find the ads deeply misguided and disturbing … The coded message makes clear who the savages are: those who support jihad, which in Geller's mind includes all Muslims. She has called Islam 'an extreme ideology, the most radical and extreme ideology on the face of the Earth'.
"As a Jew, I know the extreme to which baseless hatred can lead. And the Jewish community has been in the past a target of hatred in the United States. Geller's message ignores the positive contributions that our Muslim friends, neighbours and colleagues make to our country every single day.
"It is also unfortunate that Geller chooses to frame her message of hatred as one of support for Israel."
As head of a group called Stop Islamization of America, Geller, a rightwing blogger, helped spur a long campaign two years ago to remove a planned Islamic community centre near the World Trade Centre site, which she called the "Ground Zero Mosque".
Geller's group has also placed posters in other stations north of New York City that read: "It's not Islamophobia, it's Islamorealism."